
PyTorch losses for easy/hard sample imbalance: OHEM, Focal Loss, GHM Loss

Posted: 2023-05-04 00:40:16

https://mp.weixin.qq.com/s/iOAICJege2b0pCVxPkvNiA
Survey: solving the sample-imbalance problem in object detection
The survey mainly covers OHEM, Focal Loss, and GHM Loss. Since my binary-classification dataset has no positive/negative imbalance, I focused on handling the easy/hard-sample imbalance (normally easy samples are plentiful and hard samples are scarce). Because mine is purely a classification problem, I wrote classification versions of each loss. My network's last layer is a softmax, so the network output `pred` is the result of passing the pre-softmax logits through softmax. Plain cross entropy is then `sum(-gt * log(pred))`; but `torch.nn.CrossEntropyLoss()` applies softmax to its input again internally, so here I use `torch.nn.NLLLoss` on `torch.log(pred)` instead. That said, in my tests, keeping the softmax output layer and still using `torch.nn.CrossEntropyLoss()` performs about the same as using `torch.nn.NLLLoss`.
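As a quick check of the relationship described above (a minimal sketch of my own, not from the survey): `nn.CrossEntropyLoss` on raw logits matches `nn.NLLLoss` on the log of the softmax output, since `CrossEntropyLoss` applies log-softmax internally.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.randn(4, 2)            # raw network outputs: 4 samples, 2 classes
target = torch.tensor([0, 1, 1, 0])

# CrossEntropyLoss expects logits (it applies log_softmax itself)...
ce = torch.nn.CrossEntropyLoss()(logits, target)
# ...while NLLLoss expects log-probabilities, i.e. log(softmax(logits))
nll = torch.nn.NLLLoss()(F.log_softmax(logits, dim=1), target)

assert torch.allclose(ce, nll)
```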

OHEM:
Code reference: https://www.codeleading.com/article/7442852142/

def ohem_loss(pred, target, keep_num):
    # pred: softmax outputs of shape (N, C); reduction='none' keeps one loss per sample
    loss = torch.nn.NLLLoss(reduction='none')(torch.log(pred), target)
    # sort per-sample losses descending and keep only the keep_num hardest samples
    loss_sorted, idx = torch.sort(loss, descending=True)
    loss_keep = loss_sorted[:keep_num]
    return loss_keep.sum() / keep_num
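A toy run of the idea (my own sketch, not from the post): keeping only the largest per-sample losses necessarily yields a mean at least as large as the mean over all samples, which is exactly the "mine the hard examples" effect.

```python
import torch

torch.manual_seed(0)
pred = torch.softmax(torch.randn(8, 2), dim=1)   # softmax outputs, as in the post
target = torch.randint(0, 2, (8,))
keep_num = 3

# per-sample negative log-likelihood
per_sample = torch.nn.NLLLoss(reduction='none')(torch.log(pred), target)
# keep the keep_num hardest samples and average them
hard = torch.topk(per_sample, keep_num).values
ohem = hard.mean()

assert float(ohem) >= float(per_sample.mean())
```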

Focal loss:
Details: the original paper, Focal Loss for Dense Object Detection
Code reference: https://zhuanlan.zhihu.com/p/80594704

def focal_loss(pred, target, gamma=0.5):
    # pt: the probability the model assigns to the true class of each sample
    pred_temp = pred.detach().cpu()
    target_temp = target.detach().cpu()
    pt = torch.tensor([pred_temp[i, target_temp[i]] for i in range(target_temp.shape[0])])
    # down-weight easy samples: the weight goes to 0 as pt goes to 1
    focal_weight = (1 - pt).pow(gamma)
    per_sample = torch.nn.NLLLoss(reduction='none')(torch.log(pred), target)
    return torch.mean(per_sample.mul(focal_weight.to(pred.device).detach()))
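A small sketch of my own on the `pt` computation: the per-sample list comprehension above can be replaced by a vectorized `gather`, and the resulting focal weight (1 − pt)^γ always lies in [0, 1], shrinking toward 0 for confidently-correct samples.

```python
import torch

torch.manual_seed(0)
pred = torch.softmax(torch.randn(4, 2), dim=1)
target = torch.tensor([0, 1, 1, 0])

# vectorized alternative to the list comprehension:
# pick out the probability of the true class for each sample
pt = pred.gather(1, target.unsqueeze(1)).squeeze(1)
pt_loop = torch.tensor([pred[i, target[i]] for i in range(4)])

gamma = 0.5
focal_weight = (1 - pt).pow(gamma)

assert torch.allclose(pt, pt_loop)
assert torch.all((focal_weight >= 0) & (focal_weight <= 1))
```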

GHM loss:
Details: https://zhuanlan.zhihu.com/p/80594704
Code reference: https://github.com/DHPO/GHM_Loss.pytorch/blob/master/GHM_loss.py

class GHM_Loss(nn.Module):
    def __init__(self, bins, alpha):
        super(GHM_Loss, self).__init__()
        self._bins = bins
        self._alpha = alpha               # kept for the momentum variant; unused here
        self._last_bin_count = None

    def _g2bin(self, g):
        # map a gradient norm g in [0, 1] to a bin index in [0, bins - 1]
        return torch.floor(g * (self._bins - 0.0001)).long()

    def _custom_loss(self, x, target, weight):
        raise NotImplementedError

    def _custom_loss_grad(self, x, target):
        raise NotImplementedError

    def forward(self, x, target):
        g = torch.abs(self._custom_loss_grad(x, target))
        bin_idx = self._g2bin(g)
        bin_count = torch.zeros((self._bins))
        for i in range(self._bins):
            bin_count[i] = (bin_idx == i).sum().item()
        N = x.size(0)
        nonempty_bins = (bin_count > 0).sum().item()
        # gradient density: samples in crowded bins get down-weighted
        gd = bin_count * nonempty_bins
        gd = torch.clamp(gd, min=0.0001)
        beta = N / gd
        return self._custom_loss(x, target, beta[bin_idx])


class GHMC_Loss(GHM_Loss):
    def __init__(self, bins, alpha):
        super(GHMC_Loss, self).__init__(bins, alpha)

    def _custom_loss(self, x, target, weight):
        w = weight.to(x.device).detach()
        return torch.sum(torch.nn.NLLLoss(reduction='none')(torch.log(x), target).mul(w)) / torch.sum(w)

    def _custom_loss_grad(self, x, target):
        # gradient norm of cross entropy w.r.t. the true-class logit is |p_t - 1|,
        # i.e. small for easy samples and large for hard ones
        x = x.cpu().detach()
        target = target.cpu()
        return torch.tensor([x[i, target[i]] for i in range(target.shape[0])]) - 1
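A quick sketch of my own (not from the referenced repo) isolating the binning and density weighting inside forward(): gradient norms g in [0, 1] are mapped to bins, and each sample is weighted by N / (count-in-its-bin × nonempty-bins), so samples in crowded bins (typically the very easy or very hard ones) are down-weighted.

```python
import torch

bins = 5
# toy gradient norms in [0, 1]: three easy samples, one medium, one very hard
g = torch.tensor([0.02, 0.03, 0.05, 0.5, 0.95])

# same mapping as _g2bin above
bin_idx = torch.floor(g * (bins - 0.0001)).long()
bin_count = torch.bincount(bin_idx, minlength=bins).float()

N = g.size(0)
nonempty = (bin_count > 0).sum().item()
# GHM weight: inverse of the gradient density
beta = N / (bin_count * nonempty).clamp(min=0.0001)

# the three samples crowded into bin 0 get a smaller weight than the lone ones
assert beta[bin_idx][0] < beta[bin_idx][3]
```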
