
PyTorch self-study (CNN): torch.nn.Parameter and torch.nn.Sequential


## class torch.nn.Parameter

A kind of Tensor that is to be considered a module parameter.

Parameters are Tensor subclasses that have a very special property when used with Modules: when they are assigned as Module attributes, they are automatically added to the module's parameter list and will show up, e.g., in the parameters() iterator. Assigning a plain Tensor does not have this effect. This is because one might want to cache some temporary state in the model, such as the last hidden state of an RNN. If there were no class like Parameter, these temporaries would get registered as well.

Parameters:
data (Tensor) - the parameter tensor.
requires_grad (bool, optional) - whether the parameter requires gradient. See "Excluding subgraphs from backward" for more details. Default: True
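
To see this behavior concretely, here is a minimal sketch; ToyModule and its attribute names are made up for illustration:

import torch
import torch.nn as nn

class ToyModule(nn.Module):
    def __init__(self):
        super().__init__()
        # An nn.Parameter assigned as a module attribute is registered automatically.
        self.weight = nn.Parameter(torch.randn(4, 128))
        # A plain Tensor attribute is kept as cached state and is NOT a parameter.
        self.cache = torch.zeros(4, 128)

m = ToyModule()
print([name for name, _ in m.named_parameters()])  # ['weight'] -- only the Parameter is listed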

## class torch.nn.Sequential(*args)
A sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of modules can also be passed in.

To make it easier to understand, here is a small example:

Example of using Sequential

import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 20, 5),
    nn.ReLU(),
    nn.Conv2d(20, 64, 5),
    nn.ReLU()
)

Example of using Sequential with OrderedDict

from collections import OrderedDict

model = nn.Sequential(OrderedDict([
    ('conv1', nn.Conv2d(1, 20, 5)),
    ('relu1', nn.ReLU()),
    ('conv2', nn.Conv2d(20, 64, 5)),
    ('relu2', nn.ReLU())
]))
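
For a quick sanity check, here is one way to run the OrderedDict version above on a dummy input; the 32x32 input size is an arbitrary choice for illustration, and because the submodules were given names they are also reachable as attributes, e.g. model.conv1:

import torch                    # assumes the model defined just above

x = torch.randn(1, 1, 32, 32)   # dummy batch: 1 image, 1 input channel, 32x32 pixels
out = model(x)                  # modules are applied in the order they were added
print(out.shape)                # torch.Size([1, 64, 24, 24]) after two 5x5 convolutions
print(model.conv1)              # the named submodule is accessible as an attribute

The rest of this post walks through a complete, self-contained example that builds a small fully connected network with nn.Sequential and trains it with plain gradient descent:
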
import torch

N is batch size; D_in is input dimension; H is hidden dimension; D_out is output dimension.

N, D_in, H, D_out = 64, 1000, 100, 10

Create random Tensors to hold inputs and outputs

x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

Use the nn package to define our model as a sequence of layers. nn.Sequential is a Module which contains other Modules, and applies them in sequence to produce its output. Each Linear Module computes output from input using a linear function, and holds internal Tensors for its weight and bias.

model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

The nn package also contains definitions of popular loss functions; in this case we will use Mean Squared Error (MSE) as our loss function.

loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(x)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad
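
The manual update inside torch.no_grad() is the plain stochastic gradient descent rule param -= learning_rate * param.grad. As a point of reference, here is a minimal sketch of the same training loop written with torch.optim.SGD instead of the hand-written update, assuming the same model, loss_fn, x, y and learning_rate as above:

# A minimal sketch: the same update expressed via torch.optim.SGD.
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    optimizer.zero_grad()   # clear gradients from the previous step
    loss.backward()         # compute gradients of the loss w.r.t. all parameters
    optimizer.step()        # apply param -= learning_rate * param.grad for each parameter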