Setting an empty (zero) loss:

loss = torch.tensor(0.0).to(outs[0].device)
You can also try this:

regression_losses.append(torch.tensor(0.0).cuda())
reg_loss = torch.stack(regression_losses).mean(dim=0, keepdim=True)
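A typical use for such a zero loss is skipping a loss term when a batch item has no valid targets (for example, an image with no ground-truth boxes in a detector). A minimal sketch, where the two-image loop and the target lists are illustrative placeholders:

```python
import torch

# Accumulate per-image regression losses, substituting a zero tensor
# when an image has no annotations so torch.stack still works.
regression_losses = []
for targets in ([], [1.0, 2.0]):  # two images; the first has no boxes
    if len(targets) == 0:
        # zero loss keeps the list's shape and dtype consistent
        regression_losses.append(torch.tensor(0.0))
    else:
        pred = torch.zeros(len(targets))  # placeholder predictions
        regression_losses.append(torch.mean((pred - torch.tensor(targets)) ** 2))

reg_loss = torch.stack(regression_losses).mean(dim=0, keepdim=True)
print(reg_loss)  # mean of [0.0, 2.5] -> tensor([1.2500])
```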
Custom MSE loss implementation:

class My_loss(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, y):
        return torch.mean(torch.pow(x - y, 2))

Usage:

criterion = My_loss()
loss = criterion(outputs, targets)
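The hand-rolled mean squared error above should agree exactly with PyTorch's built-in nn.MSELoss (default reduction='mean'); a quick sanity check with random tensors:

```python
import torch
import torch.nn as nn

class My_loss(nn.Module):
    def forward(self, x, y):
        # mean of element-wise squared differences
        return torch.mean(torch.pow(x - y, 2))

torch.manual_seed(0)
outputs = torch.randn(4, 3)
targets = torch.randn(4, 3)

custom = My_loss()(outputs, targets)
builtin = nn.MSELoss()(outputs, targets)
print(torch.allclose(custom, builtin))  # True
```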
Treat the loss as a standalone layer: write the loss computation in the forward function, with no need to define backward (autograd derives the gradients automatically).
class MyLoss(nn.Module):
    def __init__(self):
        super(MyLoss, self).__init__()

    def forward(self, pred, truth):
        return torch.mean(torch.pow(pred - truth, 2))
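Because the forward pass is built entirely from differentiable torch ops, the loss plugs straight into a training step; a minimal sketch, where the linear model and random data are placeholders:

```python
import torch
import torch.nn as nn

class MyLoss(nn.Module):
    def forward(self, pred, truth):
        return torch.mean(torch.pow(pred - truth, 2))

model = nn.Linear(3, 1)  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = MyLoss()

x = torch.randn(8, 3)
y = torch.randn(8, 1)

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()   # autograd handles backward; no custom implementation needed
optimizer.step()
```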