PyTorch Deep Learning in Practice (PyTorch深度学习实战) <http://www.ituring.com.cn/book/2456>

4　Loss Functions

import torch
import torch.nn as nn
import torch.nn.functional as F

# Variable (torch.autograd) is deprecated since PyTorch 0.4;
# plain tensors are used directly here.
sample = torch.ones(2, 2)
a = torch.Tensor(2, 2)  # uninitialized memory; every element is assigned below
a[0, 0] = 0
a[0, 1] = 1
a[1, 0] = 2
a[1, 1] = 3
target = a
sample has the value [[1, 1], [1, 1]].

target has the value [[0, 1], [2, 3]].

4.1　nn.L1Loss

L1Loss is simple to compute: it takes the mean of the absolute error between the prediction and the ground truth.

criterion = nn.L1Loss()
loss = criterion(sample, target)
print(loss)
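With the sample and target tensors above, the result can be checked by hand, since the mean absolute error works out to (1 + 0 + 1 + 2) / 4:

```python
import torch
import torch.nn as nn

sample = torch.ones(2, 2)
target = torch.tensor([[0., 1.], [2., 3.]])

criterion = nn.L1Loss()
loss = criterion(sample, target)

# Mean absolute error computed manually for comparison:
manual = (sample - target).abs().mean()
print(loss.item())  # 1.0
```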

4.2　nn.SmoothL1Loss

SmoothL1Loss is also known as the Huber loss: it uses a squared loss when the error lies in (-1, 1) and the L1 loss otherwise.

criterion = nn.SmoothL1Loss()
loss = criterion(sample, target)
print(loss)
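The piecewise definition can likewise be verified elementwise: errors of magnitude 1, 0, 1, 2 give 0.5, 0, 0.5, 1.5, whose mean is 0.625.

```python
import torch
import torch.nn as nn

sample = torch.ones(2, 2)
target = torch.tensor([[0., 1.], [2., 3.]])

criterion = nn.SmoothL1Loss()
loss = criterion(sample, target)

# Piecewise Huber definition checked by hand:
# 0.5 * x^2 when |x| < 1, otherwise |x| - 0.5
diff = (sample - target).abs()
manual = torch.where(diff < 1, 0.5 * diff ** 2, diff - 0.5).mean()
print(loss.item())  # 0.625
```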

4.3　nn.MSELoss

MSELoss takes the mean of the squared error between the prediction and the ground truth.

criterion = nn.MSELoss()
loss = criterion(sample, target)
print(loss)
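For the tensors above the squared errors are 1, 0, 1, 4, so the mean squared error is 1.5:

```python
import torch
import torch.nn as nn

sample = torch.ones(2, 2)
target = torch.tensor([[0., 1.], [2., 3.]])

criterion = nn.MSELoss()
loss = criterion(sample, target)

# Mean squared error computed manually for comparison:
manual = ((sample - target) ** 2).mean()
print(loss.item())  # 1.5
```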

4.4　nn.BCELoss

BCELoss computes the binary cross entropy between the input and the target. Both must contain probabilities in [0, 1], so recent PyTorch versions reject the target tensor defined above (its values 2 and 3 fall outside that range).

criterion = nn.BCELoss()
loss = criterion(sample, target)
print(loss)
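A valid BCELoss call needs probabilities on both sides. A minimal sketch with hypothetical values (not the sample/target tensors above, which lie outside [0, 1]):

```python
import torch
import torch.nn as nn

# Hypothetical predicted probabilities (e.g. sigmoid outputs) and 0/1 labels:
probs = torch.tensor([[0.8, 0.2], [0.6, 0.9]])
labels = torch.tensor([[1., 0.], [1., 1.]])

criterion = nn.BCELoss()
loss = criterion(probs, labels)

# Binary cross entropy by definition: -(y*log(p) + (1-y)*log(1-p)), averaged
manual = -(labels * probs.log() + (1 - labels) * (1 - probs).log()).mean()
print(loss.item())
```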

4.5　nn.CrossEntropyLoss

CrossEntropyLoss expects raw scores (logits) of shape (N, C) together with a LongTensor of class indices of shape (N,), so the float target defined above cannot be passed to it directly.

criterion = nn.CrossEntropyLoss()
# target must be class indices, not the float tensor defined above:
loss = criterion(sample, torch.tensor([0, 1]))
print(loss)
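A small self-contained sketch with hypothetical logits, showing the expected shapes and checking the result against the definition (the mean of -log softmax at the true class):

```python
import torch
import torch.nn as nn

# Two samples, two classes: logits are (N, C), targets are class indices (N,)
logits = torch.tensor([[2.0, 0.5], [0.1, 1.5]])  # hypothetical scores
classes = torch.tensor([0, 1])                   # true class per sample

loss = nn.CrossEntropyLoss()(logits, classes)

# By definition: mean over samples of -log softmax(logits)[i, classes[i]]
manual = -torch.log_softmax(logits, dim=1)[torch.arange(2), classes].mean()
print(loss.item())
```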

4.6　nn.NLLLoss

nn.NLLLoss is a module; its functional counterpart is F.nll_loss. Like CrossEntropyLoss, it expects class-index targets:

criterion = nn.NLLLoss()
loss = criterion(sample, torch.tensor([0, 1]))

# or, equivalently, the functional form:
loss = F.nll_loss(sample, torch.tensor([0, 1]))
print(loss)

nn.NLLLoss and nn.CrossEntropyLoss are very similar. Both are typically used in multi-class models; in practice, NLLLoss is the more common choice.
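The relationship can be made precise: CrossEntropyLoss is log_softmax followed by NLLLoss, so feeding log-probabilities to nll_loss reproduces it exactly. A sketch with hypothetical logits:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# NLLLoss expects log-probabilities, so models typically end in log_softmax.
logits = torch.tensor([[2.0, 0.5], [0.1, 1.5]])  # hypothetical scores
classes = torch.tensor([0, 1])

log_probs = F.log_softmax(logits, dim=1)
loss = F.nll_loss(log_probs, classes)

# CrossEntropyLoss fuses the two steps into one call:
fused = nn.CrossEntropyLoss()(logits, classes)
print(loss.item())
```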

4.7　nn.NLLLoss2d

* input: (N, C, H, W)

* target: (N, H, W)

criterion = nn.NLLLoss2d()  # deprecated in recent PyTorch; nn.NLLLoss accepts 4-D input
loss = criterion(sample, target)
print(loss)
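This loss is used for per-pixel classification, e.g. semantic segmentation. In recent PyTorch versions NLLLoss2d is deprecated and plain nn.NLLLoss handles the (N, C, H, W) / (N, H, W) shapes directly. A minimal sketch with random hypothetical data:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Per-pixel classification: input (N, C, H, W), target (N, H, W)
N, C, H, W = 1, 3, 4, 4
logits = torch.randn(N, C, H, W)               # hypothetical network output
log_probs = F.log_softmax(logits, dim=1)       # log-probabilities per pixel
target = torch.randint(0, C, (N, H, W))        # one class index per pixel

# nn.NLLLoss averages the negative log-likelihood over all pixels
loss = nn.NLLLoss()(log_probs, target)
print(loss.item())
```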
