PyTorch: Extract learned weights correctly
Problem description
I am trying to extract the weights from a linear layer, but they do not appear to change, even though the error is decreasing monotonically (i.e. training is happening). When I print the sum of the weights, it stays constant:
np.sum(model.fc2.weight.data.numpy())
Here are the code snippets:
def train(epochs):
    model.train()
    for epoch in range(1, epochs+1):
        # Train on train set
        print(np.sum(model.fc2.weight.data.numpy()))
        for batch_idx, (data, target) in enumerate(train_loader):
            data, target = Variable(data), Variable(data)
            optimizer.zero_grad()
            output = model(data)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()
and
# Define model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(100, 80, bias=False)
        init.normal(self.fc1.weight, mean=0, std=1)
        self.fc2 = nn.Linear(80, 87)
        self.fc3 = nn.Linear(87, 94)
        self.fc4 = nn.Linear(94, 100)

    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(self.fc2(x))
        x = F.relu(self.fc3(x))
        x = F.relu(self.fc4(x))
        return x
Maybe I am looking at the wrong parameters, although I checked the docs. Thanks for your help!
Solution

Use model.parameters() to get the trainable weights of any model or layer. Remember to wrap the result in list(), otherwise it cannot be printed, because parameters() returns a generator.
The following code snippet works:
>>> import torch
>>> import torch.nn as nn
>>> l = nn.Linear(3,5)
>>> w = list(l.parameters())
>>> w
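Building on that, here is a minimal sketch of the same idea applied to the original problem: picking out a specific layer's weights with named_parameters() and checking that they actually change after a training step. The nn.Sequential stand-in model, the SGD optimizer, and the random data are assumptions for illustration only, not part of the question's code:

import torch
import torch.nn as nn

# Stand-in model; the layer sizes are arbitrary for illustration.
model = nn.Sequential(nn.Linear(3, 5), nn.ReLU(), nn.Linear(5, 2))

# named_parameters() pairs each tensor with a name, which makes it easy
# to tell which entry of list(model.parameters()) belongs to which layer.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))

# Snapshot one layer's weights before training. detach().clone() matters:
# the weight tensor is updated in place, so a bare reference would always
# compare the tensor with itself.
before = model[0].weight.detach().clone()

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(8, 3)       # made-up input batch
target = torch.randn(8, 2)  # made-up targets

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
optimizer.step()

# Prints False once the optimizer step has moved the weights.
print(torch.allclose(before, model[0].weight))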