
KL Divergence and the Cross-Entropy Loss Function


Cross entropy and KL divergence (Kullback–Leibler divergence) are two of the most commonly used quantities in machine learning. They measure how similar two probability distributions are and are frequently used as loss functions. This article gives the definitions of entropy, relative entropy, and cross entropy, implements each in Python, and verifies the results against the corresponding PyTorch functions.

Entropy

For ease of discussion, and to match the examples below, all random variables here are discrete.

The entropy of a random variable x with probability distribution p(x) is defined as:

$H(p) = -\sum_{x} p(x)\log p(x)$

Code and example:

import torch
import numpy as np
from torch.distributions import Categorical, kl

# Entropy
p = [0.1, 0.2, 0.3, 0.4]
Hp = -sum([p[i] * np.log(p[i]) for i in range(len(p))])
print(f"H(p) = {Hp}")

dist_p = Categorical(torch.tensor(p))
print(f"Torch H(p) = {dist_p.entropy().item()}")

Result:

H(p) = 1.2798542258336676
Torch H(p) = 1.2798542976379395
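As an extra check (added here for illustration, not part of the original article), entropy is maximized by the uniform distribution; for 4 outcomes the maximum is $\log 4 \approx 1.386$, which the value above stays below:

import numpy as np

# Entropy of the uniform distribution over 4 outcomes
uniform = [0.25, 0.25, 0.25, 0.25]
H_uniform = -sum(pi * np.log(pi) for pi in uniform)
print(H_uniform)   # 1.3862943611198906
print(np.log(4))   # the theoretical maximum for 4 outcomes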

Relative Entropy

Relative entropy is also known as KL divergence (Kullback–Leibler divergence).

Let p(x) and q(x) be two probability distributions of the random variable x. The relative entropy (KL divergence) of p with respect to q is defined as:

$D(p\|q) = \sum_{x} p(x)\log\dfrac{p(x)}{q(x)}$

The KL divergence attains its minimum value of 0 when p(x) and q(x) are identical; the more similar the two distributions, the smaller the KL divergence.

Note that $D(p\|q) \neq D(q\|p)$ in general, and the KL divergence does not satisfy the triangle inequality.
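Both properties are easy to check numerically. A small sketch (added here, not part of the original article, reusing the same p and q as in the example below):

import numpy as np

p = [0.1, 0.2, 0.3, 0.4]
q = [0.1, 0.1, 0.7, 0.1]

def kl_div(a, b):
    # D(a||b) for two discrete distributions given as lists of probabilities
    return sum(ai * np.log(ai / bi) for ai, bi in zip(a, b))

print(kl_div(p, p))   # 0.0    -> D(p||p) = 0, the minimum
print(kl_div(p, q))   # ~0.439
print(kl_div(q, p))   # ~0.385 -> differs from D(p||q), so KL is not symmetric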

If p(x) is the true distribution of the random variable and q(x) is the distribution predicted by a model, the KL divergence can be used as the loss function for a classification problem: training drives the predicted distribution towards the true one.

Code and example:

import torch
import numpy as np
from torch.distributions import Categorical, kl

# KL divergence
p = [0.1, 0.2, 0.3, 0.4]
q = [0.1, 0.1, 0.7, 0.1]
Dpq = sum([p[i] * np.log(p[i] / q[i]) for i in range(len(p))])
print(f"D(p, q) = {Dpq}")

dist_p = Categorical(torch.tensor(p))
dist_q = Categorical(torch.tensor(q))
print(f"Torch D(p, q) = {kl.kl_divergence(dist_p, dist_q)}")

Result:

D(p, q) = 0.43895782244378423
Torch D(p, q) = 0.4389578104019165

Cross Entropy

Rearranging the KL divergence:

$D(p\|q) = \sum_{x} p(x)\log p(x) - \sum_{x} p(x)\log q(x) = -H(p) - \sum_{x} p(x)\log q(x)$

Define the cross entropy of p with respect to q as:

$H(p, q) = -\sum_{x} p(x)\log q(x)$

Then the KL divergence becomes:

$D(p\|q) = H(p, q) - H(p)$

In a classification problem, the true distribution p(x) is fixed, so H(p) is a constant. Minimizing the KL divergence is therefore equivalent to minimizing the cross entropy, which is why cross entropy is used as the loss function for classification.
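To make this concrete, here is a small illustrative check (added for this write-up, not part of the original article): with p fixed, the gradients of $D(p\|q)$ and $H(p, q)$ with respect to q are identical, so the two objectives push q in exactly the same direction.

import torch

p = torch.tensor([0.1, 0.2, 0.3, 0.4])                      # fixed "true" distribution
q = torch.tensor([0.1, 0.1, 0.7, 0.1], requires_grad=True)  # predicted distribution

kl_pq = torch.sum(p * (p.log() - q.log()))   # D(p||q)
ce_pq = -torch.sum(p * q.log())              # H(p, q)

grad_kl = torch.autograd.grad(kl_pq, q)[0]
grad_ce = torch.autograd.grad(ce_pq, q)[0]
print(torch.allclose(grad_kl, grad_ce))      # True: both equal -p/q elementwise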

Code and example:

import torch
import numpy as np
from torch.distributions import Categorical, kl

# Cross entropy
p = [0.1, 0.2, 0.3, 0.4]
q = [0.1, 0.1, 0.7, 0.1]
Hpq = -sum([p[i] * np.log(q[i]) for i in range(len(p))])
print(f"H(p, q) = {Hpq}")

Result:

H(p, q) = 1.7188120482774516

This verifies the formula above: $D(p\|q) = H(p, q) - H(p)$.
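The identity can also be checked numerically with the values computed above (a small sketch added here for illustration):

import numpy as np

p = [0.1, 0.2, 0.3, 0.4]
q = [0.1, 0.1, 0.7, 0.1]

Hp  = -sum(pi * np.log(pi) for pi in p)               # H(p)    ≈ 1.2799
Hpq = -sum(pi * np.log(qi) for pi, qi in zip(p, q))   # H(p, q) ≈ 1.7188
Dpq = sum(pi * np.log(pi / qi) for pi, qi in zip(p, q))

print(np.isclose(Hpq - Hp, Dpq))   # True: D(p||q) = H(p, q) - H(p)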

Cross-entropy loss function

torch.nn.CrossEntropyLoss(weight: Optional[torch.Tensor] = None, size_average=None, ignore_index: int = -100, reduce=None, reduction: str = 'mean')

The loss takes two arguments, Input and Target. Input has shape (batch_size, C): each row holds the model's raw scores (logits) for the C classes, and it must not be passed through softmax, because CrossEntropyLoss combines softmax (applied as log-softmax) and NLLLoss internally. Target has shape (batch_size), and each value is the index of the true class.

The output is a tensor.
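The softmax + NLLLoss decomposition can be verified directly; the following minimal sketch (the random batch is made up for illustration and is not from the original article) shows it on a batch of 3 samples:

import torch
from torch.nn import CrossEntropyLoss, NLLLoss
from torch.nn.functional import log_softmax

torch.manual_seed(0)
logits  = torch.randn(3, 4)        # batch_size=3, C=4 raw scores, no softmax applied
targets = torch.tensor([1, 0, 3])  # true-class index for each sample

ce  = CrossEntropyLoss()(logits, targets)
nll = NLLLoss()(log_softmax(logits, dim=1), targets)
print(torch.allclose(ce, nll))     # True: CrossEntropyLoss = log-softmax + NLLLoss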

That sounds complicated, so here is an example:

Consider a 4-class problem. Suppose the model outputs p = [1, 2, 3, 4] and the true label is 1, i.e. the second class:

import torch
import numpy as np
from torch.distributions import Categorical, kl
from torch.nn import CrossEntropyLoss

# Cross entropy loss
p = [1, 2, 3, 4]
q = [1]  # [0, 1, 0, 0] = torch.nn.functional.one_hot(torch.tensor(q), len(p))
celoss = -p[q[0]] + np.log(sum([np.exp(i) for i in p]))
print(f"Cross Entropy Loss: {celoss}")

loss = CrossEntropyLoss()
tensor_p = torch.FloatTensor(p).unsqueeze(0)
tensor_q = torch.tensor(q)
output = loss(tensor_p, tensor_q)
print(f"Torch Cross Entropy Loss: {output.item()}")

Result:

Cross Entropy Loss: 2.4401896985611957
Torch Cross Entropy Loss: 2.4401895999908447

To explain (here tl denotes the true label, which is 1 in the example above): by the definition of the softmax function, converting q to its one-hot encoding [0, 1, 0, 0] and computing the cross entropy between q and softmax(p) gives:

$H(q, \mathrm{softmax}(p)) = -\sum_{i} q_i \log \dfrac{e^{p_i}}{\sum_{j} e^{p_j}} = -\log \dfrac{e^{p_{tl}}}{\sum_{j} e^{p_j}}$

That is:

$\mathrm{CELoss}(p, tl) = -p_{tl} + \log \sum_{j} e^{p_j}$
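The same value can also be obtained directly from log-softmax (an illustrative check added here; the variable names are not from the original article):

import torch
from torch.nn.functional import log_softmax

p  = torch.tensor([1.0, 2.0, 3.0, 4.0])
tl = 1                                # true label from the example above

# CELoss(p, tl) = -log softmax(p)[tl] = -p[tl] + log(sum(exp(p)))
celoss = -log_softmax(p, dim=0)[tl]
print(celoss.item())                  # ≈ 2.4402, matching the results above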

For brevity, this article has explained entropy, KL divergence, and cross entropy only intuitively, through formulas and examples; readers interested in the principles behind these definitions and their information-theoretic interpretation are encouraged to consult further material on information theory.

Complete code:

import torch
import numpy as np
from torch.distributions import Categorical, kl
from torch.nn import CrossEntropyLoss

# Entropy
p = [0.1, 0.2, 0.3, 0.4]
Hp = -sum([p[i] * np.log(p[i]) for i in range(len(p))])
print(f"H(p) = {Hp}")

dist_p = Categorical(torch.tensor(p))
print(f"Torch H(p) = {dist_p.entropy().item()}")

# KL divergence
p = [0.1, 0.2, 0.3, 0.4]
q = [0.1, 0.1, 0.7, 0.1]
Dpq = sum([p[i] * np.log(p[i] / q[i]) for i in range(len(p))])
print(f"D(p, q) = {Dpq}")

dist_p = Categorical(torch.tensor(p))
dist_q = Categorical(torch.tensor(q))
print(f"Torch D(p, q) = {kl.kl_divergence(dist_p, dist_q)}")

# Cross entropy
p = [0.1, 0.2, 0.3, 0.4]
q = [0.1, 0.1, 0.7, 0.1]
Hpq = -sum([p[i] * np.log(q[i]) for i in range(len(p))])
print(f"H(p, q) = {Hpq}")

# Cross entropy loss
p = [1, 2, 3, 4]
q = [1]  # [0, 1, 0, 0] = torch.nn.functional.one_hot(torch.tensor(q), len(p))
celoss = -p[q[0]] + np.log(sum([np.exp(i) for i in p]))
print(f"Cross Entropy Loss: {celoss}")

loss = CrossEntropyLoss()
tensor_p = torch.FloatTensor(p).unsqueeze(0)
tensor_q = torch.tensor(q)
output = loss(tensor_p, tensor_q)
print(f"Torch Cross Entropy Loss: {output.item()}")
