PyTorch: how to compute a second-order Jacobian?
Problem description
I have a neural network that computes a vector quantity u. I'd like to compute the first- and second-order Jacobians with respect to the input x, a single element.
Would anybody know how to do that in PyTorch? Below is a code snippet from my project:
import torch
import torch.nn as nn

class PINN(torch.nn.Module):
    def __init__(self, layers: list):
        super(PINN, self).__init__()
        self.linears = nn.ModuleList([])
        for i, dim in enumerate(layers[:-2]):
            self.linears.append(nn.Linear(dim, layers[i+1]))
            self.linears.append(nn.ReLU())
        self.linears.append(nn.Linear(layers[-2], layers[-1]))

    def forward(self, x):
        for layer in self.linears:
            x = layer(x)
        return x
I then instantiate my network:
n_in = 1
units = 50
q = 500
pinn = PINN([n_in, units, units, units, q+1])
pinn
which returns
PINN(
  (linears): ModuleList(
    (0): Linear(in_features=1, out_features=50, bias=True)
    (1): ReLU()
    (2): Linear(in_features=50, out_features=50, bias=True)
    (3): ReLU()
    (4): Linear(in_features=50, out_features=50, bias=True)
    (5): ReLU()
    (6): Linear(in_features=50, out_features=501, bias=True)
  )
)
Then I compute both the first-order (FO) and second-order (SO) Jacobians:
x = torch.randn(1, requires_grad=False)

u_x = torch.autograd.functional.jacobian(pinn, x, create_graph=True)
print("First Order Jacobian du/dx of shape {}, and features\n{}".format(u_x.shape, u_x))

u_xx = torch.autograd.functional.jacobian(lambda _: u_x, x)
print("Second Order Jacobian du_x/dx of shape {}, and features\n{}".format(u_xx.shape, u_xx))
which returns
First Order Jacobian du/dx of shape torch.Size([501, 1]), and features
tensor([[-0.0310],
        [ 0.0139],
        [-0.0081],
        [-0.0248],
        [-0.0033],
        [ 0.0013],
        [ 0.0040],
        [ 0.0273],
        ...
        [-0.0197]], grad_fn=<ViewBackward>)
Second Order Jacobian du/dx of shape torch.Size([501, 1, 1]), and features
tensor([[[0.]],
        [[0.]],
        [[0.]],
        [[0.]],
        ...
        [[0.]]])
Shouldn't u_xx be a None vector if it doesn't depend on x?
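(As a side note on the zeros above, which is not part of the original question but which I'm fairly confident about: with its default strict=False, torch.autograd.functional.jacobian fills in zeros rather than returning None when the output does not depend on the input; passing strict=True makes it raise an error instead. A minimal sketch:)

import torch
from torch.autograd.functional import jacobian

# Side note (not from the original question): jacobian() returns zeros, not None,
# when the output is independent of the input (default strict=False).
const = torch.ones(3)
x = torch.randn(2)
print(jacobian(lambda _: const, x))  # tensor of zeros with shape (3, 2)
# jacobian(lambda _: const, x, strict=True)  # would raise a RuntimeError instead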
Thanks in advance
Recommended answer
So, as @jodag mentioned in his comment, ReLU is either zero or linear, so its gradient is constant (except at 0, which is a rare event) and its second-order derivative is zero. I changed the activation function to Tanh, which finally allows me to compute the Jacobian twice.
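To make that concrete, here is a small check of my own (not from the original post): nesting two jacobian calls on a one-element input gives an all-zero second derivative for ReLU but a nonzero one for Tanh.

import torch
from torch.autograd.functional import jacobian

# Sketch (my addition): second derivative of ReLU vs. Tanh at a single point,
# obtained by differentiating the first Jacobian a second time.
x = torch.tensor([0.7])

def second_derivative(f, x):
    first = lambda inp: jacobian(f, inp, create_graph=True)  # keep the graph for the outer call
    return jacobian(first, x)

print(second_derivative(torch.relu, x))  # all zeros: ReLU's gradient is piecewise constant
print(second_derivative(torch.tanh, x))  # nonzero: -2*tanh(x)*(1 - tanh(x)**2), roughly -0.77 here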
The final code is:
import torch
import torch.nn as nn

class PINN(torch.nn.Module):
    def __init__(self, layers: list):
        super(PINN, self).__init__()
        self.linears = nn.ModuleList([])
        for i, dim in enumerate(layers[:-2]):
            self.linears.append(nn.Linear(dim, layers[i+1]))
            self.linears.append(nn.Tanh())
        self.linears.append(nn.Linear(layers[-2], layers[-1]))

    def forward(self, x):
        for layer in self.linears:
            x = layer(x)
        return x

    def compute_u_x(self, x):
        self.u_x = torch.autograd.functional.jacobian(self, x, create_graph=True)
        self.u_x = torch.squeeze(self.u_x)
        return self.u_x

    def compute_u_xx(self, x):
        self.u_xx = torch.autograd.functional.jacobian(self.compute_u_x, x)
        self.u_xx = torch.squeeze(self.u_xx)
        return self.u_xx
Then calling compute_u_xx(x) on an instance of PINN, with x.requires_grad set to True, gets me there. How to get rid of the useless dimensions introduced by torch.autograd.functional.jacobian remains to be understood though...
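For reference, a minimal usage sketch of my own (assuming the PINN class above and a single scalar input); the torch.squeeze calls in compute_u_x / compute_u_xx are what collapse the singleton dimensions that jacobian adds:

x = torch.randn(1, requires_grad=True)   # single scalar input
pinn = PINN([1, 50, 50, 50, 501])

u_x = pinn.compute_u_x(x)    # first-order Jacobian, squeezed to shape (501,)
u_xx = pinn.compute_u_xx(x)  # second-order Jacobian, squeezed to shape (501,)
print(u_x.shape, u_xx.shape)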