


CBAM: Convolutional Block Attention Module
PDF: https://arxiv.org/pdf/1807.06521.pdf
PyTorch code: https://github.com/shanglianlm0525/PyTorch-Networks
PyTorch code: https://github.com/shanglianlm0525/CvPytorch

1 Overview

CBAM is an attention mechanism for convolutional blocks that combines channel attention and spatial attention, and it noticeably improves accuracy on image classification and object detection.
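In the paper's notation, given an intermediate feature map F of size C×H×W, CBAM infers a channel attention map Mc (C×1×1) and a spatial attention map Ms (1×H×W) and applies them one after the other:

F'  = Mc(F) ⊗ F
F'' = Ms(F') ⊗ F'

where ⊗ is element-wise multiplication, broadcast over the missing dimensions.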

2 Channel Attention Module

channel attention: C×H×W → C×1×1
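In the paper, this map is computed by feeding the average-pooled and max-pooled channel descriptors through a shared MLP and summing the two results:

Mc(F) = σ( MLP(AvgPool(F)) + MLP(MaxPool(F)) )

where σ is the sigmoid function; the shared_MLP in the code below implements this shared network.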

PyTorch code:

import torch
import torch.nn as nn


class ChannelAttentionModule(nn.Module):
    def __init__(self, channel, reduction=16):
        super(ChannelAttentionModule, self).__init__()
        mid_channel = channel // reduction
        # Squeeze the spatial dimensions with both average and max pooling
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.max_pool = nn.AdaptiveMaxPool2d(1)
        # Shared MLP applied to both pooled descriptors
        self.shared_MLP = nn.Sequential(
            nn.Linear(in_features=channel, out_features=mid_channel),
            nn.ReLU(inplace=True),
            nn.Linear(in_features=mid_channel, out_features=channel)
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        avgout = self.shared_MLP(self.avg_pool(x).view(x.size(0), -1)).unsqueeze(2).unsqueeze(3)
        maxout = self.shared_MLP(self.max_pool(x).view(x.size(0), -1)).unsqueeze(2).unsqueeze(3)
        # Fuse the two branches into a C x 1 x 1 attention map
        return self.sigmoid(avgout + maxout)
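A quick shape check (a toy example, not part of the original code) confirms the C×H×W → C×1×1 mapping:

x = torch.randn(2, 64, 32, 32)            # N x C x H x W toy input
ca = ChannelAttentionModule(channel=64)
print(ca(x).shape)                        # torch.Size([2, 64, 1, 1])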

3 Spatial Attention Module

spatial attention: C×H×W → 1×H×W
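In the paper, this map is produced by pooling along the channel axis and applying a 7×7 convolution to the concatenated result:

Ms(F) = σ( f7x7([AvgPool(F); MaxPool(F)]) )

where [;] is concatenation along the channel dimension and f7x7 is a convolution with a 7×7 kernel.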

PyTorch code:

class SpatialAttentionModule(nn.Module):
    def __init__(self):
        super(SpatialAttentionModule, self).__init__()
        # 7x7 convolution over the concatenated avg- and max-pooled maps
        self.conv2d = nn.Conv2d(in_channels=2, out_channels=1, kernel_size=7, stride=1, padding=3)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # Pool along the channel axis to get two 1 x H x W descriptors
        avgout = torch.mean(x, dim=1, keepdim=True)
        maxout, _ = torch.max(x, dim=1, keepdim=True)
        out = torch.cat([avgout, maxout], dim=1)
        out = self.sigmoid(self.conv2d(out))
        return out
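Again, a quick shape check (toy example) confirms the C×H×W → 1×H×W mapping:

x = torch.randn(2, 64, 32, 32)            # N x C x H x W toy input
sa = SpatialAttentionModule()
print(sa(x).shape)                        # torch.Size([2, 1, 32, 32])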

4 ResBlock + CBAM

PyTorch code:

class CBAM(nn.Module):
    def __init__(self, channel):
        super(CBAM, self).__init__()
        self.channel_attention = ChannelAttentionModule(channel)
        self.spatial_attention = SpatialAttentionModule()

    def forward(self, x):
        # Channel attention first, then spatial attention
        out = self.channel_attention(x) * x
        out = self.spatial_attention(out) * out
        return out


class ResBlock_CBAM(nn.Module):
    def __init__(self, in_places, places, stride=1, downsampling=False, expansion=4):
        super(ResBlock_CBAM, self).__init__()
        self.expansion = expansion
        self.downsampling = downsampling

        # Standard bottleneck: 1x1 -> 3x3 -> 1x1 convolutions
        self.bottleneck = nn.Sequential(
            nn.Conv2d(in_channels=in_places, out_channels=places, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(places),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels=places, out_channels=places * self.expansion, kernel_size=1, stride=1, bias=False),
            nn.BatchNorm2d(places * self.expansion),
        )
        # CBAM refines the bottleneck output before the residual addition
        self.cbam = CBAM(channel=places * self.expansion)

        if self.downsampling:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_channels=in_places, out_channels=places * self.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(places * self.expansion)
            )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = x
        out = self.bottleneck(x)
        out = self.cbam(out)
        if self.downsampling:
            residual = self.downsample(x)
        out += residual
        out = self.relu(out)
        return out
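A minimal usage sketch (toy shapes chosen for illustration): without downsampling the identity shortcut is used, so in_places must equal places * expansion:

x = torch.randn(2, 256, 32, 32)
block = ResBlock_CBAM(in_places=256, places=64)   # output channels = 64 * 4 = 256
print(block(x).shape)                             # torch.Size([2, 256, 32, 32])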


5 Ablation

5-1 Channel attention

Using avg-pooling and max-pooling together reduces the error rate more effectively, giving roughly a 1-2% improvement; the combination provides finer-grained information and helps the model perform better.

5-2 Spatial attention


The spatial attention map is built from the channel-wise avg- and max-pooled features; in addition, a kernel size of 7 gives the best results.

5-3 Arrangement of the channel and spatial attention

Applying channel attention first and then spatial attention (the arrangement used in the final CBAM module) works better than spatial attention followed by channel attention, which in turn works better than running channel and spatial attention in parallel.
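For illustration, a minimal sketch of the three arrangements using the modules defined above; the parallel variant shown here is just one plausible fusion (applying both maps to the same input), and the paper's exact parallel setup may differ:

ca = ChannelAttentionModule(channel=64)
sa = SpatialAttentionModule()
x = torch.randn(2, 64, 32, 32)        # toy input

# channel attention first, then spatial (the final CBAM arrangement)
t = ca(x) * x
out_channel_first = sa(t) * t

# spatial attention first, then channel
t = sa(x) * x
out_spatial_first = ca(t) * t

# parallel: both maps computed from the same input, applied together
out_parallel = x * ca(x) * sa(x)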
