
RFB: Receptive Field Block Net for Accurate and Fast Object Detection


ECCV 2018: Receptive Field Block Net for Accurate and Fast Object Detection proposes a new feature extraction module, the Receptive Field Block (RFB). The paper's starting point is to mimic the receptive field of human vision in order to strengthen the network's feature extraction capability.

The effect of RFB is illustrated in the paper's figure, where the dashed box in the middle is the RFB structure. The RFB structure has two main characteristics:

1. A multi-branch structure built from convolution layers with different kernel sizes. This part can be compared with the Inception structure; in the paper's Figure 2, circles of different sizes likewise denote convolution layers with different kernel sizes (a minimal sketch of this idea follows this list).

2. Dilated convolution layers are introduced. Dilated convolution was previously applied in the segmentation algorithm DeepLab; its main role is likewise to enlarge the receptive field, and it is distinct from deformable convolution (a sketch of its effect also follows this list).
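
To illustrate point 1, here is a minimal sketch of an Inception-style multi-branch block. It is not the paper's exact module; MultiBranchBlock and its channel counts are made up for this example:

import torch
import torch.nn as nn

class MultiBranchBlock(nn.Module):
    # Each branch applies a different kernel size to the same input;
    # the outputs are concatenated along the channel dimension.
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.branch3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)

    def forward(self, x):
        return torch.cat(
            [self.branch1(x), self.branch3(x), self.branch5(x)], dim=1)

block = MultiBranchBlock(in_ch=32, branch_ch=16)
print(block(torch.randn(1, 32, 20, 20)).shape)  # torch.Size([1, 48, 20, 20])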
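
And for point 2, a small sketch of how dilation enlarges the receptive field without adding parameters (the tensor shapes here are made up):

import torch
import torch.nn as nn

# A 3x3 kernel with dilation d covers an effective window of
# k + (k - 1) * (d - 1) pixels per side: 3x3 for d=1, 5x5 for d=2,
# 7x7 for d=3. The receptive field grows with no extra parameters.
x = torch.randn(1, 16, 32, 32)  # made-up input: batch 1, 16 channels, 32x32
conv_d1 = nn.Conv2d(16, 16, kernel_size=3, padding=1, dilation=1)
conv_d2 = nn.Conv2d(16, 16, kernel_size=3, padding=2, dilation=2)

# Setting padding = dilation keeps the spatial size unchanged for a 3x3 kernel.
print(conv_d1(x).shape)  # torch.Size([1, 16, 32, 32])
print(conv_d2(x).shape)  # torch.Size([1, 16, 32, 32])

# Both layers have exactly the same parameter count.
assert sum(p.numel() for p in conv_d1.parameters()) == \
       sum(p.numel() for p in conv_d2.parameters())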

In the RFB structure, the dilated convolution layers are parameterized by different rates. At the end of the structure, the outputs of the convolution layers with different kernel sizes and rates are concatenated, achieving the goal of fusing features of different scales. The figure overlays the three outputs in different sizes and colors, and its last column compares the fused features with the human visual receptive field; as the figure shows, the two are very close. This is precisely the paper's starting point: designing the RFB structure to mimic the receptive field of human vision.

The figure below shows the two RFB structures. (a) is RFB; its overall layout borrows from the Inception idea, with the main difference being the three dilated convolution layers appended to the branches (for example, a 3×3 conv with rate=1), which is also one of the paper's main means of enlarging the receptive field. (b) is RFB-s. Compared with RFB, RFB-s makes two main changes: it replaces the 5×5 convolution layer with a 3×3 convolution layer, and replaces the 3×3 convolution layer with 1×3 and 3×1 convolution layers. The main goal is presumably to reduce computation, much like the factorizations introduced in later versions of Inception.
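
To see why these substitutions cut cost, here is a back-of-the-envelope sketch comparing parameter counts; the channel count C is illustrative, not taken from the paper:

import torch.nn as nn

def n_params(m):
    return sum(p.numel() for p in m.parameters())

C = 128  # illustrative channel count

# RFB-s substitution 1: a 3x3 conv in place of a 5x5 conv.
print(n_params(nn.Conv2d(C, C, 5, padding=2, bias=False)))  # 128*128*25 = 409600
print(n_params(nn.Conv2d(C, C, 3, padding=1, bias=False)))  # 128*128*9  = 147456

# RFB-s substitution 2: stacked 1x3 + 3x1 convs in place of a 3x3 conv.
factorized = nn.Sequential(
    nn.Conv2d(C, C, (1, 3), padding=(0, 1), bias=False),
    nn.Conv2d(C, C, (3, 1), padding=(1, 0), bias=False),
)
print(n_params(factorized))  # 2 * 128*128*3 = 98304, vs 147456 for the 3x3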

RFB code implementation

import torch
import torch.nn as nn


class BasicConv(nn.Module):
    """Conv2d followed by optional BatchNorm and ReLU."""
    def __init__(self, in_planes, out_planes, kernel_size, stride=1, padding=0,
                 dilation=1, groups=1, relu=True, bn=True, bias=False):
        super(BasicConv, self).__init__()
        self.out_channels = out_planes
        self.conv = nn.Conv2d(in_planes, out_planes, kernel_size=kernel_size,
                              stride=stride, padding=padding, dilation=dilation,
                              groups=groups, bias=bias)
        self.bn = nn.BatchNorm2d(out_planes, eps=1e-5, momentum=0.01,
                                 affine=True) if bn else None
        self.relu = nn.ReLU(inplace=True) if relu else None

    def forward(self, x):
        x = self.conv(x)
        if self.bn is not None:
            x = self.bn(x)
        if self.relu is not None:
            x = self.relu(x)
        return x


class BasicRFB(nn.Module):
    def __init__(self, in_planes, out_planes, stride=1, scale=0.1, visual=1):
        super(BasicRFB, self).__init__()
        self.scale = scale
        self.out_channels = out_planes
        inter_planes = in_planes // 8
        # Three branches; each ends with a dilated 3x3 conv whose rate grows
        # with the branch's effective kernel size (visual, visual+1, 2*visual+1).
        self.branch0 = nn.Sequential(
            BasicConv(in_planes, 2*inter_planes, kernel_size=1, stride=stride),
            BasicConv(2*inter_planes, 2*inter_planes, kernel_size=3, stride=1,
                      padding=visual, dilation=visual, relu=False)
        )
        self.branch1 = nn.Sequential(
            BasicConv(in_planes, inter_planes, kernel_size=1, stride=1),
            BasicConv(inter_planes, 2*inter_planes, kernel_size=(3, 3),
                      stride=stride, padding=(1, 1)),
            BasicConv(2*inter_planes, 2*inter_planes, kernel_size=3, stride=1,
                      padding=visual+1, dilation=visual+1, relu=False)
        )
        self.branch2 = nn.Sequential(
            BasicConv(in_planes, inter_planes, kernel_size=1, stride=1),
            BasicConv(inter_planes, (inter_planes//2)*3, kernel_size=3,
                      stride=1, padding=1),
            BasicConv((inter_planes//2)*3, 2*inter_planes, kernel_size=3,
                      stride=stride, padding=1),
            BasicConv(2*inter_planes, 2*inter_planes, kernel_size=3, stride=1,
                      padding=2*visual+1, dilation=2*visual+1, relu=False)
        )
        # A 1x1 conv fuses the concatenated branches; the shortcut adds a residual path.
        self.ConvLinear = BasicConv(6*inter_planes, out_planes, kernel_size=1,
                                    stride=1, relu=False)
        self.shortcut = BasicConv(in_planes, out_planes, kernel_size=1,
                                  stride=stride, relu=False)
        self.relu = nn.ReLU(inplace=False)

    def forward(self, x):
        x0 = self.branch0(x)
        x1 = self.branch1(x)
        x2 = self.branch2(x)
        out = torch.cat((x0, x1, x2), 1)
        out = self.ConvLinear(out)
        short = self.shortcut(x)
        out = out * self.scale + short   # scaled residual connection
        out = self.relu(out)
        return out


class BasicRFB_a(nn.Module):
    def __init__(self, in_planes, out_planes, stride=1, scale=0.1):
        super(BasicRFB_a, self).__init__()
        self.scale = scale
        self.out_channels = out_planes
        inter_planes = in_planes // 4
        # Four branches; the RFB-s design replaces larger kernels with
        # 3x3 and factorized 1x3 / 3x1 convolutions.
        self.branch0 = nn.Sequential(
            BasicConv(in_planes, inter_planes, kernel_size=1, stride=1),
            BasicConv(inter_planes, inter_planes, kernel_size=3, stride=1,
                      padding=1, relu=False)
        )
        self.branch1 = nn.Sequential(
            BasicConv(in_planes, inter_planes, kernel_size=1, stride=1),
            BasicConv(inter_planes, inter_planes, kernel_size=(3, 1), stride=1,
                      padding=(1, 0)),
            BasicConv(inter_planes, inter_planes, kernel_size=3, stride=1,
                      padding=3, dilation=3, relu=False)
        )
        self.branch2 = nn.Sequential(
            BasicConv(in_planes, inter_planes, kernel_size=1, stride=1),
            BasicConv(inter_planes, inter_planes, kernel_size=(1, 3),
                      stride=stride, padding=(0, 1)),
            BasicConv(inter_planes, inter_planes, kernel_size=3, stride=1,
                      padding=3, dilation=3, relu=False)
        )
        self.branch3 = nn.Sequential(
            BasicConv(in_planes, inter_planes//2, kernel_size=1, stride=1),
            BasicConv(inter_planes//2, (inter_planes//4)*3, kernel_size=(1, 3),
                      stride=1, padding=(0, 1)),
            BasicConv((inter_planes//4)*3, inter_planes, kernel_size=(3, 1),
                      stride=stride, padding=(1, 0)),
            BasicConv(inter_planes, inter_planes, kernel_size=3, stride=1,
                      padding=5, dilation=5, relu=False)
        )
        self.ConvLinear = BasicConv(4*inter_planes, out_planes, kernel_size=1,
                                    stride=1, relu=False)
        self.shortcut = BasicConv(in_planes, out_planes, kernel_size=1,
                                  stride=stride, relu=False)
        self.relu = nn.ReLU(inplace=False)

    def forward(self, x):
        x0 = self.branch0(x)
        x1 = self.branch1(x)
        x2 = self.branch2(x)
        x3 = self.branch3(x)
        out = torch.cat((x0, x1, x2, x3), 1)
        out = self.ConvLinear(out)
        short = self.shortcut(x)
        out = out * self.scale + short   # scaled residual connection
        out = self.relu(out)
        return out
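
As a quick smoke test of the two modules (the channel counts and input sizes below are illustrative, not prescribed by the paper):

# Illustrative usage; channel counts and spatial sizes are made up.
rfb = BasicRFB(in_planes=256, out_planes=256, stride=1, scale=0.1, visual=1)
rfb_a = BasicRFB_a(in_planes=512, out_planes=512, stride=1, scale=0.1)

x = torch.randn(2, 256, 38, 38)
y = torch.randn(2, 512, 19, 19)
print(rfb(x).shape)    # torch.Size([2, 256, 38, 38])
print(rfb_a(y).shape)  # torch.Size([2, 512, 19, 19])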
