How to clear CUDA memory in PyTorch
Problem description
I am trying to get the output of a neural network which I have already trained. The input is an image of size 300x300. I am using a batch size of 1, but I still get a CUDA error: out of memory error after I have successfully got the output for 25 images.
I searched for some solutions online and came across torch.cuda.empty_cache(). But this still doesn't seem to solve the problem.
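For reference, PyTorch's allocator counters make it easy to see what empty_cache() does and does not release: torch.cuda.memory_allocated() reports memory held by live tensors, while torch.cuda.memory_reserved() reports what the caching allocator has set aside. A minimal check, assuming a CUDA device is available:

import torch

print(torch.cuda.memory_allocated())  # bytes held by live tensors
print(torch.cuda.memory_reserved())   # bytes cached by the allocator

torch.cuda.empty_cache()               # returns unused cached blocks to the driver
print(torch.cuda.memory_reserved())    # may shrink; memory_allocated() will not,
                                       # since live tensors cannot be freed this way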
This is the code I am using:
import torch

# model and train_x are assumed to come from the earlier training code
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

train_x = torch.tensor(train_x, dtype=torch.float32).view(-1, 1, 300, 300)
train_x = train_x.to(device)
dataloader = torch.utils.data.DataLoader(train_x, batch_size=1, shuffle=False)

right = []
for i, left in enumerate(dataloader):
    print(i)
    temp = model(left).view(-1, 1, 300, 300)
    right.append(temp.to('cpu'))   # keep the outputs on the CPU
    del temp
    torch.cuda.empty_cache()
This for loop runs for 25 iterations every time before giving the memory error.
Every time, I am sending a new image into the network for computation, so I don't really need to keep the previous computation results on the GPU after each iteration of the loop. Is there any way to achieve this?
Any help will be appreciated. Thanks.
Recommended answer
I figured out where I was going wrong. I am posting the solution as an answer for others who might be struggling with the same problem.
Basically, whenever I pass data through my network, PyTorch creates a computational graph and stores the intermediate computations in GPU memory in case I want to calculate gradients during backpropagation. But since I only wanted to perform a forward pass, I simply needed to specify torch.no_grad() for my model.
Thus, the for loop in my code could be rewritten as:
for i, left in enumerate(dataloader):
    print(i)
    with torch.no_grad():
        temp = model(left).view(-1, 1, 300, 300)  # forward pass without building a graph
    right.append(temp.to('cpu'))
    del temp
    torch.cuda.empty_cache()
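The same idea also works with a single no_grad() context wrapped around the whole loop (a sketch of a variant, not how the answer above wrote it); since no graph is kept alive between iterations, the per-iteration del and empty_cache() calls are then usually unnecessary:

right = []
with torch.no_grad():                       # no autograd graph for anything inside
    for i, left in enumerate(dataloader):
        temp = model(left).view(-1, 1, 300, 300)
        right.append(temp.to('cpu'))        # move each result off the GPU

On newer PyTorch releases, torch.inference_mode() can be used in place of torch.no_grad() for a slightly stricter (and sometimes faster) version of the same idea.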
Specifying no_grad() for my model tells PyTorch that I don't want to store any of the previous computations, thus freeing up my GPU memory.
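A quick way to confirm that the autograd graph was what held the memory is to compare the allocator counter for a forward pass run with and without no_grad(). A minimal sketch, assuming model and the 300x300 input shape from above are already on the GPU:

x = torch.randn(1, 1, 300, 300, device=device)

with torch.no_grad():
    out = model(x)                          # no graph, no saved activations
print(torch.cuda.memory_allocated())        # roughly just the tensors themselves

out = model(x)                              # graph and saved activations stay alive
print(torch.cuda.memory_allocated())        # noticeably larger for most models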