PyTorch: What is the difference between tensor.cuda() and tensor.to(torch.device("cuda:0"))?
Question
In PyTorch, what is the difference between the following two methods of sending a tensor (or model) to the GPU:
Setup:
import numpy as np
import torch

X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]])  # X = model()
X = torch.DoubleTensor(X)
Method 1 | Method 2
---|---
`X.cuda()` | `device = torch.device("cuda:0")`; `X = X.to(device)`
(I don't really need a detailed explanation of what is happening in the backend, I just want to know whether they are both essentially doing the same thing.)
Answer
There is no difference between the two.

Early versions of PyTorch had .cuda() and .cpu() methods to move tensors and models from CPU to GPU and back. However, this made the code a bit cumbersome:
cuda_available = torch.cuda.is_available()
if cuda_available:
    x = x.cuda()
    model.cuda()
else:
    x = x.cpu()
    model.cpu()
Later versions introduced .to(), which basically takes care of everything in an elegant way:
device = torch.device('cuda') if cuda_available else torch.device('cpu')
x = x.to(device)
model = model.to(device)
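As a minimal sketch of the equivalence, the snippet below moves the same tensor both ways and checks that the results land on the same device with identical values. It falls back to CPU when CUDA is unavailable so it runs anywhere; note that .cuda() itself raises on a CUDA-less machine, which is exactly the inconvenience .to() avoids.

```python
import numpy as np
import torch

X = torch.DoubleTensor(np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]]))

# Pick one target device; on a GPU machine this is "cuda:0".
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")

# Method 2: .to() accepts any device, CPU or GPU.
a = X.to(device)

# Method 1: .cuda() only works when CUDA is present, hence the guard.
b = X.cuda() if torch.cuda.is_available() else X.cpu()

# Both tensors end up on the same device and hold the same values.
same_device = a.device == b.device
same_values = torch.equal(a.cpu(), b.cpu())
print(same_device, same_values)
```

Note that neither call modifies `X` in place: both return a (possibly new) tensor, which is why the idiomatic pattern is always `x = x.to(device)` rather than a bare `x.to(device)`.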