Pytorch: how to add L1 regularizer to activations?
Problem description
I would like to add an L1 regularizer to the activation outputs of a ReLU. More generally, how does one add a regularizer to only a particular layer in the network?
Related material:
This similar post refers to adding L2 regularization, but it appears to apply the regularization penalty to all layers of the network.
nn.modules.loss.L1Loss() seems relevant, but I do not yet understand how to use it.
The legacy module L1Penalty also seems relevant, but why has it been deprecated?
Recommended answer
Here is how you can do this:
- In your Module's forward, return both the final output and the outputs of the layers you want to apply L1 regularization to.
- The loss variable will then be the sum of the cross-entropy loss of the output w.r.t. the targets and the L1 penalties.
Here is example code:
import torch
from torch.autograd import Variable
from torch.nn import functional as F


class MLP(torch.nn.Module):
    def __init__(self):
        super(MLP, self).__init__()
        self.linear1 = torch.nn.Linear(128, 32)
        self.linear2 = torch.nn.Linear(32, 16)
        self.linear3 = torch.nn.Linear(16, 2)

    def forward(self, x):
        layer1_out = F.relu(self.linear1(x))
        layer2_out = F.relu(self.linear2(layer1_out))
        out = self.linear3(layer2_out)
        # return the intermediate activations as well, so the caller can penalize them
        return out, layer1_out, layer2_out


batchsize = 4
lambda1, lambda2 = 0.5, 0.01

model = MLP()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

# usually the following code is looped over all batches,
# but let's just do a dummy batch for brevity
inputs = Variable(torch.rand(batchsize, 128))    # Variable is a no-op wrapper since PyTorch 0.4; plain tensors also work
targets = Variable(torch.ones(batchsize).long())

optimizer.zero_grad()
outputs, layer1_out, layer2_out = model(inputs)
cross_entropy_loss = F.cross_entropy(outputs, targets)

# flatten each layer's parameters (weights and biases) into a single vector
all_linear1_params = torch.cat([x.view(-1) for x in model.linear1.parameters()])
all_linear2_params = torch.cat([x.view(-1) for x in model.linear2.parameters()])

# note: these penalties are computed on the parameters of linear1/linear2,
# not on the activations returned by forward
l1_regularization = lambda1 * torch.norm(all_linear1_params, 1)
l2_regularization = lambda2 * torch.norm(all_linear2_params, 2)

loss = cross_entropy_loss + l1_regularization + l2_regularization
loss.backward()
optimizer.step()
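Note that the snippet above applies the L1/L2 penalties to the weights of linear1 and linear2, not to the ReLU activations that forward returns. If you want the penalty on the activations themselves, as the question asks, a minimal sketch (reusing the model, inputs, targets, optimizer, batchsize and lambda1 defined above, with lambda1 serving only as an illustrative coefficient) could look like this:

optimizer.zero_grad()
outputs, layer1_out, layer2_out = model(inputs)
cross_entropy_loss = F.cross_entropy(outputs, targets)

# L1 penalty on the ReLU activations, averaged over the batch
l1_activation_penalty = lambda1 * (layer1_out.abs().sum() + layer2_out.abs().sum()) / batchsize

loss = cross_entropy_loss + l1_activation_penalty
loss.backward()
optimizer.step()

Equivalently, F.l1_loss(layer1_out, torch.zeros_like(layer1_out), reduction='sum') computes the same per-layer activation penalty, which is one way to use the nn.modules.loss.L1Loss mentioned in the question.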