Pytorch "NCCL error": unhandled system error, NCCL version 2.4.8
Problem description
I am using PyTorch distributed training for my model. I have two nodes with two GPUs each, and I run this on one node:
python train_net.py --config-file configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml --num-gpu 2 --num-machines 2 --machine-rank 0 --dist-url tcp://192.168.**.***:8000
and this on the other:
python train_net.py --config-file configs/InstanceSegmentation/pointrend_rcnn_R_50_FPN_1x_coco.yaml --num-gpu 2 --num-machines 2 --machine-rank 1 --dist-url tcp://192.168.**.***:8000
However, the second node fails with a RuntimeError:
global_rank 3 machine_rank 1 num_gpus_per_machine 2 local_rank 1
global_rank 2 machine_rank 1 num_gpus_per_machine 2 local_rank 0
Traceback (most recent call last):
File "train_net.py", line 109, in <module>
args=(args,),
File "/root/detectron2_repo/detectron2/engine/launch.py", line 49, in launch
daemon=False,
File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
while not spawn_context.join():
File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 118, in join
raise Exception(msg)
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/root/detectron2_repo/detectron2/engine/launch.py", line 72, in _distributed_worker
comm.synchronize()
File "/root/detectron2_repo/detectron2/utils/comm.py", line 79, in synchronize
dist.barrier()
File "/root/anaconda3/envs/PointRend/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 1489, in barrier
work = _default_pg.barrier()
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:410, unhandled system error, NCCL version 2.4.8
If I change --machine-rank from 1 to 0, no error is reported, but then training is not actually distributed. Does anyone know why this error occurs?
Answer
A number of things can cause this issue; see for example 1, 2. Adding the lines
import os
os.environ["NCCL_DEBUG"] = "INFO"
to your script will log more specific debug information leading up to the error, giving you a more useful error message to search for.
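As a rough illustration (not part of the original answer), the sketch below is a minimal multi-node connectivity check that is independent of detectron2: it enables NCCL_DEBUG before the process group is created and then calls dist.barrier(), which is the call that failed in the traceback above. The master address placeholder, port, and world size are assumptions matching the two-node, two-GPU setup in the question.

import argparse
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp

GPUS_PER_NODE = 2               # matches --num-gpu 2 in the question
NUM_NODES = 2                   # matches --num-machines 2
MASTER_ADDR = "192.168.**.***"  # placeholder, same address as in --dist-url
MASTER_PORT = "8000"


def worker(local_rank: int, node_rank: int) -> None:
    # Enable verbose NCCL logging before the process group is created.
    os.environ["NCCL_DEBUG"] = "INFO"
    os.environ["MASTER_ADDR"] = MASTER_ADDR
    os.environ["MASTER_PORT"] = MASTER_PORT

    global_rank = node_rank * GPUS_PER_NODE + local_rank
    world_size = NUM_NODES * GPUS_PER_NODE

    torch.cuda.set_device(local_rank)
    dist.init_process_group("nccl", rank=global_rank, world_size=world_size)

    # This is the call that raised the NCCL error in the traceback above.
    dist.barrier()
    print(f"rank {global_rank}: barrier OK")
    dist.destroy_process_group()


if __name__ == "__main__":
    # Run one copy of this script per node, e.g.
    #   python nccl_check.py --node-rank 0   (on the first node)
    #   python nccl_check.py --node-rank 1   (on the second node)
    parser = argparse.ArgumentParser()
    parser.add_argument("--node-rank", type=int, required=True)
    args = parser.parse_args()
    mp.spawn(worker, args=(args.node_rank,), nprocs=GPUS_PER_NODE)

If this barrier succeeds on both nodes but the full training job still fails, the NCCL_DEBUG output from the real run is the place to look for the specific transport or interface that is misbehaving.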