Why do I have to call MPI.Finalize() inside the destructor?
Question
I am currently trying to understand mpi4py. I set mpi4py.rc.initialize = False and mpi4py.rc.finalize = False because I don't see why we would want automatic initialization and finalization. The default behavior is that MPI.Init() is called when MPI is imported. I assume this is because an instance of the Python interpreter runs for each rank and each instance runs the whole script, but that is only a guess. In the end, I like things to be explicit.
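For illustration, here is a minimal sketch (not part of the original question) of what opting out of automatic initialization changes; MPI.Is_initialized() is the standard mpi4py query and can be called before MPI.Init():

import mpi4py
mpi4py.rc.initialize = False     # opt out of automatic MPI_Init on import
from mpi4py import MPI

print(MPI.Is_initialized())      # False: nothing has been initialized yet
MPI.Init()
print(MPI.Is_initialized())      # True: MPI is now up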
Now this introduces some problems. I have this code:
import numpy as np
import mpi4py
mpi4py.rc.initialize = False  # do not initialize MPI automatically
mpi4py.rc.finalize = False    # do not finalize MPI automatically
from mpi4py import MPI        # import the 'MPI' module
import h5py


class DataGenerator:
    def __init__(self, filename, N, comm):
        self.comm = comm
        self.file = h5py.File(filename, 'w', driver="mpio", comm=comm)

        # Create datasets
        self.data_ds = self.file.create_dataset("indices", (N, 1), dtype='i')

    def __del__(self):
        self.file.close()


if __name__ == '__main__':
    MPI.Init()

    world = MPI.COMM_WORLD
    world_rank = MPI.COMM_WORLD.rank

    filename = "test.hdf5"
    N = 10

    data_gen = DataGenerator(filename, N, comm=world)

    MPI.Finalize()
This results in:
$ mpiexec -n 4 python test.py
*** The MPI_Barrier() function was called after MPI_FINALIZE was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[eu-login-04:01559] Local abort after MPI_FINALIZE started completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
*** The MPI_Barrier() function was called after MPI_FINALIZE was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[eu-login-04:01560] Local abort after MPI_FINALIZE started completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
*** The MPI_Barrier() function was called after MPI_FINALIZE was invoked.
*** This is disallowed by the MPI standard.
*** Your MPI job will now abort.
[eu-login-04:01557] Local abort after MPI_FINALIZE started completed successfully, but am not able to aggregate error messages, and not able to guarantee that all other processes were killed!
--------------------------------------------------------------------------
mpiexec detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was:

  Process name: [[15050,1],3]
  Exit code:    1
--------------------------------------------------------------------------
I am a bit confused about what is going on here. If I move MPI.Finalize() to the end of the destructor, it works fine.
Note that I also use h5py, which uses MPI for its parallelization, so I have parallel file I/O here. Note that h5py needs to be compiled with MPI support. You can easily do this by setting up a virtual environment and running pip install --no-binary=h5py h5py.
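(A quick sanity check, not in the original question: you can confirm at runtime whether your h5py build actually has MPI support.)

import h5py
print(h5py.get_config().mpi)   # True only if h5py was built against parallel HDF5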
Answer
The way you wrote it, data_gen stays alive until the main block returns. But you call MPI.Finalize inside that block, so the destructor runs after Finalize. The h5py.File.close method seems to call MPI.Comm.Barrier internally, and calling that after Finalize is forbidden by the MPI standard.
If you want explicit control, make sure all objects are destroyed before you call MPI.Finalize. Of course, even that may not be enough if some objects are only destroyed by the garbage collector rather than by the reference counter.
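For example, in the original script the ordering could be enforced explicitly; a minimal sketch, assuming data_gen holds the only reference to the object (so CPython's reference counting runs __del__ immediately):

    data_gen = DataGenerator(filename, N, comm=world)
    # ... use data_gen ...
    del data_gen      # drop the last reference; __del__ runs here and closes the file
    MPI.Finalize()    # safe: the close (and its internal barrier) already happened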
To avoid relying on destruction order altogether, use a context manager instead of a destructor:
class DataGenerator:
    def __init__(self, filename, N, comm):
        self.comm = comm
        self.file = h5py.File(filename, 'w', driver="mpio", comm=comm)

        # Create datasets
        self.data_ds = self.file.create_dataset("indices", (N, 1), dtype='i')

    def __enter__(self):
        return self

    def __exit__(self, type, value, traceback):
        self.file.close()


if __name__ == '__main__':
    MPI.Init()

    world = MPI.COMM_WORLD
    world_rank = MPI.COMM_WORLD.rank

    filename = "test.hdf5"
    N = 10

    with DataGenerator(filename, N, comm=world) as data_gen:
        pass

    MPI.Finalize()
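As a side note (not from the original answer), h5py.File is itself a context manager, so for simple cases the wrapper class can be skipped entirely; a minimal sketch:

if __name__ == '__main__':
    MPI.Init()
    world = MPI.COMM_WORLD

    # The file is closed when the with-block exits, i.e. before MPI.Finalize().
    with h5py.File("test.hdf5", 'w', driver="mpio", comm=world) as f:
        data_ds = f.create_dataset("indices", (10, 1), dtype='i')

    MPI.Finalize()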