## class torch.nn.Parameter

A kind of Tensor that is to be considered a module parameter.

Parameters are Tensor subclasses that have a very special property when used with Modules: when they are assigned as module attributes, they are automatically added to the module's list of parameters and will appear, for example, in the parameters() iterator. Assigning a plain Tensor does not have this effect. The reason is that one might want to cache some temporary state in the model, such as the last hidden state of an RNN. If there were no class like Parameter, these temporaries would get registered too. A minimal sketch of this behavior is given after the parameter list below.
Parameters:
data (Tensor) – parameter tensor.
requires_grad (bool, optional) – whether the parameter requires gradient. See Excluding subgraphs from backward for details. Default: True
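As promised above, here is a minimal sketch of the registration behavior (the Demo module and its attribute names are invented for illustration): the Parameter attribute is picked up by named_parameters(), while the plain Tensor attribute is not.

import torch
import torch.nn as nn

class Demo(nn.Module):
    def __init__(self):
        super().__init__()
        # A Parameter assigned as an attribute is registered automatically
        self.weight = nn.Parameter(torch.randn(3, 3))
        # A plain Tensor is treated as cached state and is NOT registered
        self.last_hidden = torch.zeros(3)

m = Demo()
print([name for name, _ in m.named_parameters()])  # prints ['weight']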
## class torch.nn.Sequential(*args)
A sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of modules can also be passed in.
To make it easier to understand, here is a small example:
model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)
# The same model, built from an OrderedDict so each submodule gets a name
from collections import OrderedDict

model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(1, 20, 5)),
    ('relu1', nn.ReLU()),
    ('conv2', nn.Conv2d(20, 64, 5)),
    ('relu2', nn.ReLU())
]))
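A note on the OrderedDict form: each submodule is registered under the given key, so it can also be retrieved by name as an attribute, for example:

print(model.conv1)  # the nn.Conv2d(1, 20, 5) registered under 'conv1'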
The following complete example trains a small two-layer network built with nn.Sequential on random data:

import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use nn.Sequential to define the model as a sequence of layers
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

# Mean Squared Error loss, summed over the batch
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the call operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(x)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad
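As a usage note, the manual update above can equivalently be written with an optimizer from torch.optim; this sketch (reusing model, loss_fn, x, y, and learning_rate from the example) uses torch.optim.SGD, whose plain form performs exactly the same param -= lr * grad step:

optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    optimizer.zero_grad()  # clear gradients from the previous iteration
    loss.backward()        # compute gradients of the loss w.r.t. parameters
    optimizer.step()       # apply the gradient-descent update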