Usage of torch.nn.BCELoss


1. Definition

The mathematical formula is Loss = -w * [p * log(q) + (1-p) * log(1-q)], where p is the ground-truth label, q is the predicted value, and w is the weight. The log here is the natural logarithm ln.

$$ Loss = -w\left[\, p\log(q) + (1-p)\log(1-q) \,\right] $$
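For the two possible label values, the formula reduces to a single term, which makes the penalty structure explicit: a confident wrong prediction (q near 0 when p = 1, or q near 1 when p = 0) is penalized heavily.

$$ p = 1 \Rightarrow Loss = -w\log(q), \qquad p = 0 \Rightarrow Loss = -w\log(1-q) $$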

The corresponding PyTorch function is torch.nn.BCELoss(weight=None, size_average=None, reduce=None, reduction='mean'), which computes the binary cross-entropy loss between the target values and the predicted values.

There are four optional parameters: weight, size_average, reduce, and reduction (size_average and reduce are deprecated in favor of reduction).

(1) weight must have the same shape as target; it defaults to None and is specified when constructing the BCELoss.
(2) By default, nn.BCELoss() behaves as reduce=True, size_average=True.
(3) If reduce=False, size_average has no effect and the loss is returned as an element-wise vector.
(4) If reduce=True and size_average=True, the mean of the loss is returned, i.e. loss.mean().
(5) If reduce=True and size_average=False, the sum of the loss is returned, i.e. loss.sum().
(6) If reduction='none', the element-wise loss vector is returned directly.
(7) If reduction='sum', the sum of the loss is returned.
(8) If reduction='elementwise_mean' (an older alias for 'mean'), the mean of the loss is returned.
(9) If reduction='mean', the mean of the loss is returned (see the sketch below).
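The relationships among the reduction modes can be verified directly. Below is a minimal sketch (the tensor values are arbitrary):

import torch
import torch.nn as nn

pred = torch.tensor([0.8, 0.4, 0.6])     # predictions, already in (0, 1)
target = torch.tensor([1.0, 0.0, 1.0])   # binary labels

loss_none = nn.BCELoss(reduction='none')(pred, target)  # element-wise vector
loss_sum = nn.BCELoss(reduction='sum')(pred, target)    # scalar sum
loss_mean = nn.BCELoss(reduction='mean')(pred, target)  # scalar mean

assert torch.isclose(loss_sum, loss_none.sum())
assert torch.isclose(loss_mean, loss_none.mean())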

2. Verification code

2.1 size_average=False, reduce=False: element-wise loss

import torch
import torch.nn as nn

m = nn.Sigmoid()

# Deprecated flags; equivalent to reduction='none'
loss = nn.BCELoss(size_average=False, reduce=False)
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)   # random 0/1 labels
lossinput = m(input)                 # squash logits into (0, 1)
output = loss(lossinput, target)

print("Input values:")
print(lossinput)
print("Target values:")
print(target)
print("Loss:")
print(output)
UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead.
  warnings.warn(warning.format(ret))
Input values:
tensor([0.8594, 0.3397, 0.3772], grad_fn=<SigmoidBackward>)
Target values:
tensor([1., 1., 1.])
Loss:
tensor([0.1515, 1.0797, 0.9749], grad_fn=<BinaryCrossEntropyBackward>)
Recomputing the element-wise loss by hand from the printed predictions:

import math

def f(p, q):
    # Element-wise BCE with w = 1: -(p*ln(q) + (1-p)*ln(1-q))
    w = 1
    return -w * (p * math.log(q) + (1 - p) * math.log(1 - q))

if __name__ == '__main__':
    p = [1, 1, 1]                  # targets from the run above
    q = [0.8594, 0.3397, 0.3772]   # predictions from the run above
    for i in range(3):
        loss = f(p[i], q[i])
        print(loss)

0.15152080764124226
1.0796924038155988
0.9749797282228346
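The hand-computed values match PyTorch's element-wise output above to four decimal places (the small residual comes from using the printed, rounded predictions). The comparison can also be automated; a sketch, reusing output from the run above:

manual = torch.tensor([0.1515, 1.0797, 0.9750])
print(torch.allclose(output.detach(), manual, atol=1e-3))  # True for this run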

2.2 size_average=True, reduce=False: size_average is ignored

import torch
import torch.nn as nn

m = nn.Sigmoid()

# reduce=False: size_average is ignored and the element-wise loss is returned
loss = nn.BCELoss(size_average=True, reduce=False)
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)
lossinput = m(input)
output = loss(lossinput, target)

print("Input values:")
print(lossinput)
print("Target values:")
print(target)
print("Loss:")
print(output)
UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead.
  warnings.warn(warning.format(ret))
Input values:
tensor([0.3780, 0.4630, 0.2005], grad_fn=<SigmoidBackward>)
Target values:
tensor([1., 0., 0.])
Loss:
tensor([0.9729, 0.6218, 0.2237], grad_fn=<BinaryCrossEntropyBackward>)
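Because reduce=False, the size_average=True flag has no effect: the result is identical to reduction='none'. A quick check (a sketch, reusing lossinput, target, and output from the block above):

loss_none = nn.BCELoss(reduction='none')(lossinput, target)
print(torch.allclose(output, loss_none))  # True: size_average was ignored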

2.3 size_average=True, reduce=True: mean

import torch
import torch.nn as nn

m = nn.Sigmoid()

# reduce=True, size_average=True: returns the mean (equivalent to reduction='mean')
loss = nn.BCELoss(size_average=True, reduce=True)
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)
lossinput = m(input)
output = loss(lossinput, target)

print("Input values:")
print(lossinput)
print("Target values:")
print(target)
print("Loss:")
print(output)
UserWarning: size_average and reduce args will be deprecated, please use reduction='mean' instead.
  warnings.warn(warning.format(ret))
Input values:
tensor([0.3404, 0.8976, 0.2261], grad_fn=<SigmoidBackward>)
Target values:
tensor([1., 0., 1.])
Loss:
tensor(1.6144, grad_fn=<BinaryCrossEntropyBackward>)
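The scalar is the mean of the three element-wise losses. Recomputing it by hand from the printed predictions and targets:

import math

p = [1.0, 0.0, 1.0]            # targets from the run above
q = [0.3404, 0.8976, 0.2261]   # predictions from the run above
losses = [-(pi * math.log(qi) + (1 - pi) * math.log(1 - qi)) for pi, qi in zip(p, q)]
print(sum(losses) / len(losses))  # ≈ 1.6144, matching the output above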

2.4 size_average=False, reduce=True: sum

import torch
import torch.nn as nn

m = nn.Sigmoid()

# reduce=True, size_average=False: returns the sum (equivalent to reduction='sum')
loss = nn.BCELoss(size_average=False, reduce=True)
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)
lossinput = m(input)
output = loss(lossinput, target)

print("Input values:")
print(lossinput)
print("Target values:")
print(target)
print("Loss:")
print(output)
UserWarning: size_average and reduce args will be deprecated, please use reduction='sum' instead.
  warnings.warn(warning.format(ret))
Input values:
tensor([0.6045, 0.6022, 0.4022], grad_fn=<SigmoidBackward>)
Target values:
tensor([1., 1., 1.])
Loss:
tensor(1.9213, grad_fn=<BinaryCrossEntropyBackward>)
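Likewise, the scalar here is the sum of the element-wise losses. Since all targets in this run are 1, each term reduces to -ln(q):

import math

q = [0.6045, 0.6022, 0.4022]            # predictions from the run above
print(sum(-math.log(qi) for qi in q))   # ≈ 1.9213, matching the output above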

2.5 reduction='none'

import torch
import torch.nn as nn

m = nn.Sigmoid()

# The modern, non-deprecated way to get the element-wise loss
loss = nn.BCELoss(reduction='none')
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)
lossinput = m(input)
output = loss(lossinput, target)

print("Input values:")
print(lossinput)
print("Target values:")
print(target)
print("Loss:")
print(output)
Input values:
tensor([0.2534, 0.7657, 0.6084], grad_fn=<SigmoidBackward>)
Target values:
tensor([0., 0., 0.])
Loss:
tensor([0.2922, 1.4513, 0.9375], grad_fn=<BinaryCrossEntropyBackward>)
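Since all targets in this run are 0, each element is simply -ln(1-q):

import math

q = [0.2534, 0.7657, 0.6084]              # predictions from the run above
print([-math.log(1 - qi) for qi in q])    # ≈ [0.2922, 1.4513, 0.9375]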

2.6 weight combined with element-wise loss

import torch
import torch.nn as nn

m = nn.Sigmoid()
weights = torch.randn(3)   # note: randn can produce negative weights

loss = nn.BCELoss(weight=weights, size_average=False, reduce=False)
input = torch.randn(3, requires_grad=True)
target = torch.empty(3).random_(2)
lossinput = m(input)
output = loss(lossinput, target)

print("Input values:")
print(lossinput)
print("Target values:")
print(target)
print("Weights:")
print(weights)
print("Loss:")
print(output)
Input values:
tensor([0.1604, 0.4188, 0.4425], grad_fn=<SigmoidBackward>)
Target values:
tensor([0., 0., 0.])
Weights:
tensor([ 1.1217, -0.8692,  0.4580])
Loss:
tensor([ 0.1962, -0.4717,  0.2677], grad_fn=<BinaryCrossEntropyBackward>)
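Each element of the weighted loss is the unweighted element-wise loss multiplied by the corresponding weight. Note that torch.randn produced a negative weight here, which yields a negative loss term; in practice, weights are usually kept positive. A quick check (a sketch, reusing lossinput, target, weights, and output from above):

loss_unweighted = nn.BCELoss(reduction='none')(lossinput, target)
print(torch.allclose(output, weights * loss_unweighted))  # True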