# [Image Classification] [Deep Learning] [PyTorch] Inception-ResNet: Model and Algorithm Explained

**Contents**

- Preface
- Inception-ResNet Explained
  - Inception-ResNet-V1
  - Inception-ResNet-V2
  - Scaling of the Residuals
  - Overall Model Structure
- GoogLeNet (Inception-ResNet) PyTorch Code
  - Inception-ResNet-V1
  - Inception-ResNet-V2
- Complete Code
  - Inception-ResNet-V1
  - Inception-ResNet-V2
- Summary

## Preface

GoogLeNet (Inception-ResNet) was proposed by Szegedy et al. at Google in "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning" (AAAI 2017) [paper link]. Inspired by the strong performance of ResNet [reference] on very deep networks, the paper adds residual connections to the Inception architecture, producing two Inception-ResNet variants. The residual connection takes the place of the pooling path in the original Inception block, and the block's concatenation is replaced by element-wise summation, which improves Inception's training speed.

Because Inception-v4, Inception-ResNet-v1 and Inception-ResNet-v2 all come from the same paper, many readers misread Inception-v4 as a combination of Inception modules with residual learning. It is not: Inception-v4 uses no residual learning and essentially continues the Inception-v2/v3 design. Only Inception-ResNet-v1 and Inception-ResNet-v2 combine Inception modules with residual learning.

## Inception-ResNet Explained

The core idea of Inception-ResNet is to fuse the Inception module with the ResNet shortcut so as to exploit the strengths of both. The Inception module captures multi-scale features with parallel convolutions of different kernel sizes, while the residual connection counters vanishing and exploding gradients in deep networks and makes deep models easier to train. Inception-ResNet uses Inception modules similar to those of InceptionV4 [reference] and adds ResNet-style residual connections to them: each block combines a regular multi-branch Inception structure with a shortcut connection. This design lets the network learn rich feature representations while gradients propagate more effectively during training.

### Inception-ResNet-V1

Inception-ResNet-v1: a structure with roughly the same computational cost as InceptionV3 [reference].

#### Stem

The Stem of Inception-ResNet-V1 resembles the layers that precede the Inception block groups in the earlier InceptionV3 network.

Convolutions not marked "V" use SAME padding, so at stride 1 the output spatial size matches the input; convolutions marked "V" use VALID padding, and the output size depends on the kernel size and stride.

#### Inception-resnet-A

A variant of the Inception-A block in InceptionV4. The trailing 1×1 convolution exists solely to make the main branch's feature-map shape exactly match the shortcut branch.

In an Inception-resnet block, the residual connection replaces the pooling path of the plain Inception block, and the residual addition replaces the concatenation at the block output.

#### Inception-resnet-B

A variant of the Inception-B block in InceptionV4; again, the 1×1 convolution keeps the main branch's feature-map shape identical to the shortcut branch.

#### Inception-resnet-C

A variant of the Inception-C block in InceptionV4; again, the 1×1 convolution keeps the main branch's feature-map shape identical to the shortcut branch.

#### Reduction-A

Structurally identical to Reduction-A in InceptionV4; only the filter counts differ. Here k and l denote filter counts: different network variants instantiate Reduction-A with different k and l.

#### Reduction-B

Given as a figure in the original paper; the `redutionB` class below implements it.
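Before turning to Inception-ResNet-V2, here is a quick numeric check of the SAME/VALID rule described above. The helper below is my own sketch (not part of the original post), using standard convolution arithmetic to trace the V1 Stem on a 299×299 input:

```python
import math

def conv_out(size, kernel, stride=1, same=False):
    # SAME keeps ceil(size / stride); VALID gives floor((size - kernel) / stride) + 1
    if same:
        return math.ceil(size / stride)
    return (size - kernel) // stride + 1

s = 299
s = conv_out(s, 3, stride=2)   # conv3x3(32, stride 2, V)  -> 149
s = conv_out(s, 3)             # conv3x3(32, V)            -> 147
s = conv_out(s, 3, same=True)  # conv3x3(64, SAME)         -> 147
s = conv_out(s, 3, stride=2)   # maxpool3x3(stride 2, V)   -> 73
s = conv_out(s, 1)             # conv1x1(80)               -> 73
s = conv_out(s, 3)             # conv3x3(192, V)           -> 71
s = conv_out(s, 3, stride=2)   # conv3x3(256, stride 2, V) -> 35
print(s)                       # 35, matching the 35x35x256 Stem output in the paper
```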
### Inception-ResNet-V2

Inception-ResNet-v2: a structure with the same computational cost as InceptionV4, but faster to train than pure Inception-v4.

The overall framework of Inception-ResNet-v2 matches Inception-ResNet-v1. The stem of Inception-ResNet-v2 is the same as Inception V4's; the remaining blocks are similar to Inception-ResNet-v1's, just with more convolution filters per layer.

#### Stem

The stem of Inception-ResNet-v2 is identical to Inception V4's.

#### Inception-resnet-A

A variant of the Inception-A block in InceptionV4; the 1×1 convolution keeps the main branch's feature-map shape identical to the shortcut branch.

#### Inception-resnet-B

A variant of the Inception-B block in InceptionV4; the 1×1 convolution keeps the main branch's feature-map shape identical to the shortcut branch.

#### Inception-resnet-C

A variant of the Inception-C block in InceptionV4; the 1×1 convolution keeps the main branch's feature-map shape identical to the shortcut branch.

#### Reduction-A

Structurally identical to Reduction-A in InceptionV4; only the filter counts differ. k and l denote filter counts, and each network variant instantiates Reduction-A with its own k and l.

#### Reduction-B

Given as a figure in the original paper; implemented by the `redutionB` class below.

### Scaling of the Residuals

When a single layer has a very large number of convolution filters (beyond about 1000), the residual variants become unstable and the network "dies" early in training: after a few tens of thousands of iterations, the layers before the average pooling start producing only zeros. Lowering the learning rate or adding extra BN layers does not prevent this. Scaling down the residual-branch output before adding it to the shortcut stabilizes training.

A scaling factor between 0.1 and 0.3 is typically used on the residual-block output. The scaling is not strictly necessary (it does not seem to affect final accuracy), but it benefits training stability.
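As a minimal sketch of this scaling trick (my own illustration; the class and variable names below are not from the paper), any shape-preserving branch can be wrapped like this:

```python
import torch
import torch.nn as nn

class ScaledResidual(nn.Module):
    """out = relu(x + scale * branch(x)), with scale typically in [0.1, 0.3]."""
    def __init__(self, branch, scale=0.2):
        super().__init__()
        self.branch = branch              # any module that preserves the input shape
        self.scale = scale                # residual scaling factor
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.scale * self.branch(x))

# a shape-preserving conv stands in for an Inception branch:
block = ScaledResidual(nn.Conv2d(64, 64, 3, padding=1), scale=0.2)
print(block(torch.randn(1, 64, 35, 35)).shape)   # torch.Size([1, 64, 35, 35])
```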
### Overall Model Structure

The original paper provides detailed diagrams of the Inception-ResNet-V1 and Inception-ResNet-V2 architectures (figures not reproduced here).

Note for readers: some of the channel counts annotated on the Inception-ResNet-V2 diagram in the original paper are wrong; they will not add up when you write the code.

The two versions share the same overall structure; the specific Stem, Inception blocks and Reduction blocks differ slightly. For image classification, Inception-ResNet-V1 and Inception-ResNet-V2 both split into two parts: a backbone composed of the Stem, the Inception-resnet blocks and the pooling (aggregation) layers, and a classifier composed of fully connected layers.

## GoogLeNet (Inception-ResNet) PyTorch Code

### Inception-ResNet-V1

Convolution group: convolution layer + BN layer + activation function.

```python
# Conv group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x
```

Stem module: convolution groups + max pooling.

```python
# Stem: BasicConv2d + MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()
        # conv3x3(32, stride 2, valid)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3x3(32, valid)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3x3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        # maxpool3x3(stride 2, valid)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        # conv1x1(80)
        self.conv5 = BasicConv2d(64, 80, kernel_size=1)
        # conv3x3(192, valid)
        self.conv6 = BasicConv2d(80, 192, kernel_size=3)
        # conv3x3(256, stride 2, valid)
        self.conv7 = BasicConv2d(192, 256, kernel_size=3, stride=2)

    def forward(self, x):
        x = self.maxpool4(self.conv3(self.conv2(self.conv1(x))))
        x = self.conv7(self.conv6(self.conv5(x)))
        return x
```

Inception-ResNet-A module: convolution branches + 1×1 projection + scaled residual addition.

```python
# Inception-ResNet-A: conv branches -> concat -> 1x1 conv -> scaled residual add
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(32) -> conv3x3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1))
        # conv1x1(32) -> conv3x3(32) -> conv3x3(32)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1))
        # conv1x1(256)
        self.conv = BasicConv2d(ch1x1 + ch3x3 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)
```

Inception-ResNet-B module: convolution branches + 1×1 projection + scaled residual addition.

```python
# Inception-ResNet-B: conv branches -> concat -> 1x1 conv -> scaled residual add
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(128)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(128) -> conv1x7(128) -> conv7x1(128)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0)))
        # conv1x1(896)
        self.conv = BasicConv2d(ch1x1 + ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)
```

Inception-ResNet-C module: convolution branches + 1×1 projection + scaled residual addition.

```python
# Inception-ResNet-C: conv branches -> concat -> 1x1 conv -> scaled residual add
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(192) -> conv1x3(192) -> conv3x1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0)))
        # conv1x1(1792)
        self.conv = BasicConv2d(ch1x1 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res
```

Reduction-A module: convolution branches + max pooling.

```python
# Reduction-A: BasicConv2d + MaxPool2d
class redutionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(redutionA, self).__init__()
        # conv3x3(n, stride 2, valid)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2))
        # conv1x1(k) -> conv3x3(l) -> conv3x3(m, stride 2, valid)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2))
        # maxpool3x3(stride 2, valid)
        self.branch3 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=2))

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate branch outputs
        outputs = [branch1, branch2, branch3]
        return torch.cat(outputs, 1)
```

Reduction-B module: convolution branches + max pooling.

```python
# Reduction-B: BasicConv2d + MaxPool2d
class redutionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(redutionB, self).__init__()
        # conv1x1(256) -> conv3x3(384, stride 2, valid)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0))
        # conv1x1(256) -> conv3x3(256, stride 2, valid)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0))
        # conv1x1(256) -> conv3x3(256) -> conv3x3(256, stride 2, valid)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0))
        # maxpool3x3(stride 2, valid)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)
```
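A quick sanity check of the blocks just defined (my own snippet, assuming `torch`, `nn` and the classes above are in scope): every Inception-ResNet block must preserve both the channel count and the spatial size, otherwise the addition in `forward` would fail.

```python
x = torch.randn(2, 256, 35, 35)   # the V1 Stem output shape
block = Inception_ResNet_A(256, 32, 32, 32, 32, 32, 32, 256, scale=0.17)
print(block(x).shape)             # torch.Size([2, 256, 35, 35]) -- unchanged
```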
### Inception-ResNet-V2

Apart from the Stem, the modules of Inception-ResNet-V2 are structurally identical to those of Inception-ResNet-V1; only the filter counts differ.

Convolution group: convolution layer + BN layer + activation function.

```python
# Conv group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x
```

Stem module: convolution groups + max pooling.

```python
# Stem: BasicConv2d + MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()
        # conv3x3(32, stride 2, valid)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3x3(32, valid)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3x3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        # split: maxpool3x3(stride 2, valid) & conv3x3(96, stride 2, valid)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        self.conv4 = BasicConv2d(64, 96, kernel_size=3, stride=2)
        # branch: conv1x1(64) -> conv3x3(96, valid)
        self.conv5_1_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_1_2 = BasicConv2d(64, 96, kernel_size=3)
        # branch: conv1x1(64) -> conv7x1(64) -> conv1x7(64) -> conv3x3(96, valid)
        self.conv5_2_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_2_2 = BasicConv2d(64, 64, kernel_size=(7, 1), padding=(3, 0))
        self.conv5_2_3 = BasicConv2d(64, 64, kernel_size=(1, 7), padding=(0, 3))
        self.conv5_2_4 = BasicConv2d(64, 96, kernel_size=3)
        # split: conv3x3(192, stride 2, valid) & maxpool3x3(stride 2, valid)
        self.conv6 = BasicConv2d(192, 192, kernel_size=3, stride=2)
        self.maxpool6 = nn.MaxPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        # shared trunk: conv1-conv3 feed both sides of the first split
        x = self.conv3(self.conv2(self.conv1(x)))
        x1_1 = self.maxpool4(x)
        x1_2 = self.conv4(x)
        x1 = torch.cat([x1_1, x1_2], 1)
        x2_1 = self.conv5_1_2(self.conv5_1_1(x1))
        x2_2 = self.conv5_2_4(self.conv5_2_3(self.conv5_2_2(self.conv5_2_1(x1))))
        x2 = torch.cat([x2_1, x2_2], 1)
        x3_1 = self.conv6(x2)
        x3_2 = self.maxpool6(x2)
        x3 = torch.cat([x3_1, x3_2], 1)
        return x3
```
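Another sanity check (my addition, with the same assumptions as before): per the paper's figure, the V2 stem maps a (3, 299, 299) input to (384, 35, 35).

```python
x = torch.randn(2, 3, 299, 299)
print(Stem(3)(x).shape)   # torch.Size([2, 384, 35, 35])
```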
Inception-ResNet-A module: convolution branches + 1×1 projection + scaled residual addition.

```python
# Inception-ResNet-A: conv branches -> concat -> 1x1 conv -> scaled residual add
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(32) -> conv3x3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1))
        # conv1x1(32) -> conv3x3(48) -> conv3x3(64)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1))
        # conv1x1(384)
        self.conv = BasicConv2d(ch1x1 + ch3x3 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)
```

Inception-ResNet-B module: convolution branches + 1×1 projection + scaled residual addition.

```python
# Inception-ResNet-B: conv branches -> concat -> 1x1 conv -> scaled residual add
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(128) -> conv1x7(160) -> conv7x1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0)))
        # conv1x1(1152 here; the paper's figure says 1154, but its counts do not add up)
        self.conv = BasicConv2d(ch1x1 + ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)
```

Inception-ResNet-C module: convolution branches + 1×1 projection + scaled residual addition.

```python
# Inception-ResNet-C: conv branches -> concat -> 1x1 conv -> scaled residual add
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(192) -> conv1x3(224) -> conv3x1(256)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0)))
        # conv1x1(2144 here; the paper's figure says 2048)
        self.conv = BasicConv2d(ch1x1 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res
```

Reduction-A module: convolution branches + max pooling.

```python
# Reduction-A: BasicConv2d + MaxPool2d
class redutionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(redutionA, self).__init__()
        # conv3x3(n, stride 2, valid)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2))
        # conv1x1(k) -> conv3x3(l) -> conv3x3(m, stride 2, valid)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2))
        # maxpool3x3(stride 2, valid)
        self.branch3 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=2))

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate branch outputs
        outputs = [branch1, branch2, branch3]
        return torch.cat(outputs, 1)
```

Reduction-B module: convolution branches + max pooling.

```python
# Reduction-B: BasicConv2d + MaxPool2d
class redutionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(redutionB, self).__init__()
        # conv1x1(256) -> conv3x3(384, stride 2, valid)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0))
        # conv1x1(256) -> conv3x3(288, stride 2, valid)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0))
        # conv1x1(256) -> conv3x3(288) -> conv3x3(320, stride 2, valid)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0))
        # maxpool3x3(stride 2, valid)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)
```
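Since the paper's annotated channel counts for V2 are partly wrong, it helps to redo the bookkeeping by hand (my own check): a reduction block concatenates its convolution branches with the max-pooled input, so its output width is the sum of the branch widths plus the input width.

```python
# redutionA output: n + m + in_channels
print(384 + 384 + 384)          # 1152 -> input width of the Inception-ResNet-B stage
# redutionB output: ch3x3_1 + ch3x3_2 + ch3x3_4 + in_channels
print(384 + 288 + 320 + 1152)   # 2144 -> input width of the Inception-ResNet-C stage
```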
## Complete Code

The input image size for Inception-ResNet is 299×299.

### Inception-ResNet-V1

```python
import torch
import torch.nn as nn
from torchsummary import summary


# Conv group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x


# Stem: BasicConv2d + MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()
        # conv3x3(32, stride 2, valid)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3x3(32, valid)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3x3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        # maxpool3x3(stride 2, valid)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        # conv1x1(80)
        self.conv5 = BasicConv2d(64, 80, kernel_size=1)
        # conv3x3(192, valid)
        self.conv6 = BasicConv2d(80, 192, kernel_size=3)
        # conv3x3(256, stride 2, valid)
        self.conv7 = BasicConv2d(192, 256, kernel_size=3, stride=2)

    def forward(self, x):
        x = self.maxpool4(self.conv3(self.conv2(self.conv1(x))))
        x = self.conv7(self.conv6(self.conv5(x)))
        return x


# Inception-ResNet-A: conv branches -> concat -> 1x1 conv -> scaled residual add
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(32) -> conv3x3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1))
        # conv1x1(32) -> conv3x3(32) -> conv3x3(32)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1))
        # conv1x1(256)
        self.conv = BasicConv2d(ch1x1 + ch3x3 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)


# Inception-ResNet-B: conv branches -> concat -> 1x1 conv -> scaled residual add
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(128)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(128) -> conv1x7(128) -> conv7x1(128)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0)))
        # conv1x1(896)
        self.conv = BasicConv2d(ch1x1 + ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)


# Inception-ResNet-C: conv branches -> concat -> 1x1 conv -> scaled residual add
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(192) -> conv1x3(192) -> conv3x1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0)))
        # conv1x1(1792)
        self.conv = BasicConv2d(ch1x1 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res


# Reduction-A: BasicConv2d + MaxPool2d
class redutionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(redutionA, self).__init__()
        # conv3x3(n, stride 2, valid)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2))
        # conv1x1(k) -> conv3x3(l) -> conv3x3(m, stride 2, valid)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2))
        # maxpool3x3(stride 2, valid)
        self.branch3 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=2))

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate branch outputs
        outputs = [branch1, branch2, branch3]
        return torch.cat(outputs, 1)


# Reduction-B: BasicConv2d + MaxPool2d
class redutionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(redutionB, self).__init__()
        # conv1x1(256) -> conv3x3(384, stride 2, valid)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0))
        # conv1x1(256) -> conv3x3(256, stride 2, valid)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0))
        # conv1x1(256) -> conv3x3(256) -> conv3x3(256, stride 2, valid)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0))
        # maxpool3x3(stride 2, valid)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)


class Inception_ResNetv1(nn.Module):
    def __init__(self, num_classes=1000, k=192, l=192, m=256, n=384):
        super(Inception_ResNetv1, self).__init__()
        blocks = []
        blocks.append(Stem(3))
        for i in range(5):
            blocks.append(Inception_ResNet_A(256, 32, 32, 32, 32, 32, 32, 256, 0.17))
        blocks.append(redutionA(256, k, l, m, n))
        for i in range(10):
            blocks.append(Inception_ResNet_B(896, 128, 128, 128, 128, 896, 0.10))
        blocks.append(redutionB(896, 256, 384, 256, 256, 256))
        for i in range(4):
            blocks.append(Inception_ResNet_C(1792, 192, 192, 192, 192, 1792, 0.20))
        # final block: no ReLU after the residual add
        blocks.append(Inception_ResNet_C(1792, 192, 192, 192, 192, 1792, activation=False))
        self.features = nn.Sequential(*blocks)
        self.conv = BasicConv2d(1792, 1536, 1)
        self.global_average_pooling = nn.AdaptiveAvgPool2d((1, 1))
        # note: nn.Dropout takes the drop probability (the paper specifies keep = 0.8)
        self.dropout = nn.Dropout(0.8)
        self.linear = nn.Linear(1536, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.conv(x)
        x = self.global_average_pooling(x)
        x = x.view(x.size(0), -1)
        x = self.dropout(x)
        x = self.linear(x)
        return x


if __name__ == '__main__':
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    model = Inception_ResNetv1().to(device)
    summary(model, input_size=(3, 299, 299))
```

`summary` prints the network structure and parameter counts, which makes it convenient to inspect the assembled network.
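As a rough scale check (my addition, not part of the original post), you can also count the parameters of the assembled model; this assumes the script above has been run:

```python
model = Inception_ResNetv1()
print(sum(p.numel() for p in model.parameters()) / 1e6, "M parameters")
```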
### Inception-ResNet-V2

```python
import torch
import torch.nn as nn
from torchsummary import summary


# Conv group: Conv2d + BN + ReLU
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x


# Stem: BasicConv2d + MaxPool2d
class Stem(nn.Module):
    def __init__(self, in_channels):
        super(Stem, self).__init__()
        # conv3x3(32, stride 2, valid)
        self.conv1 = BasicConv2d(in_channels, 32, kernel_size=3, stride=2)
        # conv3x3(32, valid)
        self.conv2 = BasicConv2d(32, 32, kernel_size=3)
        # conv3x3(64)
        self.conv3 = BasicConv2d(32, 64, kernel_size=3, padding=1)
        # split: maxpool3x3(stride 2, valid) & conv3x3(96, stride 2, valid)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2)
        self.conv4 = BasicConv2d(64, 96, kernel_size=3, stride=2)
        # branch: conv1x1(64) -> conv3x3(96, valid)
        self.conv5_1_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_1_2 = BasicConv2d(64, 96, kernel_size=3)
        # branch: conv1x1(64) -> conv7x1(64) -> conv1x7(64) -> conv3x3(96, valid)
        self.conv5_2_1 = BasicConv2d(160, 64, kernel_size=1)
        self.conv5_2_2 = BasicConv2d(64, 64, kernel_size=(7, 1), padding=(3, 0))
        self.conv5_2_3 = BasicConv2d(64, 64, kernel_size=(1, 7), padding=(0, 3))
        self.conv5_2_4 = BasicConv2d(64, 96, kernel_size=3)
        # split: conv3x3(192, stride 2, valid) & maxpool3x3(stride 2, valid)
        self.conv6 = BasicConv2d(192, 192, kernel_size=3, stride=2)
        self.maxpool6 = nn.MaxPool2d(kernel_size=3, stride=2)

    def forward(self, x):
        # shared trunk: conv1-conv3 feed both sides of the first split
        x = self.conv3(self.conv2(self.conv1(x)))
        x1_1 = self.maxpool4(x)
        x1_2 = self.conv4(x)
        x1 = torch.cat([x1_1, x1_2], 1)
        x2_1 = self.conv5_1_2(self.conv5_1_1(x1))
        x2_2 = self.conv5_2_4(self.conv5_2_3(self.conv5_2_2(self.conv5_2_1(x1))))
        x2 = torch.cat([x2_1, x2_2], 1)
        x3_1 = self.conv6(x2)
        x3_2 = self.maxpool6(x2)
        x3 = torch.cat([x3_1, x3_2], 1)
        return x3


# Inception-ResNet-A: conv branches -> concat -> 1x1 conv -> scaled residual add
class Inception_ResNet_A(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_A, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(32)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(32) -> conv3x3(32)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3red, 1),
            BasicConv2d(ch3x3red, ch3x3, 3, stride=1, padding=1))
        # conv1x1(32) -> conv3x3(48) -> conv3x3(64)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, 3, stride=1, padding=1),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, 3, stride=1, padding=1))
        # conv1x1(384)
        self.conv = BasicConv2d(ch1x1 + ch3x3 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1, x2), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)


# Inception-ResNet-B: conv branches -> concat -> 1x1 conv -> scaled residual add
class Inception_ResNet_B(nn.Module):
    def __init__(self, in_channels, ch1x1, ch_red, ch_1, ch_2, ch1x1ext, scale=1.0):
        super(Inception_ResNet_B, self).__init__()
        # residual scaling factor
        self.scale = scale
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(128) -> conv1x7(160) -> conv7x1(192)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch_red, 1),
            BasicConv2d(ch_red, ch_1, (1, 7), stride=1, padding=(0, 3)),
            BasicConv2d(ch_1, ch_2, (7, 1), stride=1, padding=(3, 0)))
        # conv1x1(1152 here; the paper's figure says 1154)
        self.conv = BasicConv2d(ch1x1 + ch_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        return self.relu(x + self.scale * x_res)


# Inception-ResNet-C: conv branches -> concat -> 1x1 conv -> scaled residual add
class Inception_ResNet_C(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3redX2, ch3x3X2_1, ch3x3X2_2, ch1x1ext, scale=1.0, activation=True):
        super(Inception_ResNet_C, self).__init__()
        # residual scaling factor
        self.scale = scale
        # whether to apply the final ReLU
        self.activation = activation
        # conv1x1(192)
        self.branch_0 = BasicConv2d(in_channels, ch1x1, 1)
        # conv1x1(192) -> conv1x3(224) -> conv3x1(256)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch3x3redX2, 1),
            BasicConv2d(ch3x3redX2, ch3x3X2_1, (1, 3), stride=1, padding=(0, 1)),
            BasicConv2d(ch3x3X2_1, ch3x3X2_2, (3, 1), stride=1, padding=(1, 0)))
        # conv1x1(2144 here; the paper's figure says 2048)
        self.conv = BasicConv2d(ch1x1 + ch3x3X2_2, ch1x1ext, 1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        # concatenate branch outputs
        x_res = torch.cat((x0, x1), dim=1)
        x_res = self.conv(x_res)
        if self.activation:
            return self.relu(x + self.scale * x_res)
        return x + self.scale * x_res


# Reduction-A: BasicConv2d + MaxPool2d
class redutionA(nn.Module):
    def __init__(self, in_channels, k, l, m, n):
        super(redutionA, self).__init__()
        # conv3x3(n, stride 2, valid)
        self.branch1 = nn.Sequential(
            BasicConv2d(in_channels, n, kernel_size=3, stride=2))
        # conv1x1(k) -> conv3x3(l) -> conv3x3(m, stride 2, valid)
        self.branch2 = nn.Sequential(
            BasicConv2d(in_channels, k, kernel_size=1),
            BasicConv2d(k, l, kernel_size=3, padding=1),
            BasicConv2d(l, m, kernel_size=3, stride=2))
        # maxpool3x3(stride 2, valid)
        self.branch3 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=2))

    def forward(self, x):
        branch1 = self.branch1(x)
        branch2 = self.branch2(x)
        branch3 = self.branch3(x)
        # concatenate branch outputs
        outputs = [branch1, branch2, branch3]
        return torch.cat(outputs, 1)

# Reduction-B: BasicConv2d + MaxPool2d
class redutionB(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3_1, ch3x3_2, ch3x3_3, ch3x3_4):
        super(redutionB, self).__init__()
        # conv1x1(256) -> conv3x3(384, stride 2, valid)
        self.branch_0 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_1, 3, stride=2, padding=0))
        # conv1x1(256) -> conv3x3(288, stride 2, valid)
        self.branch_1 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_2, 3, stride=2, padding=0))
        # conv1x1(256) -> conv3x3(288) -> conv3x3(320, stride 2, valid)
        self.branch_2 = nn.Sequential(
            BasicConv2d(in_channels, ch1x1, 1),
            BasicConv2d(ch1x1, ch3x3_3, 3, stride=1, padding=1),
            BasicConv2d(ch3x3_3, ch3x3_4, 3, stride=2, padding=0))
        # maxpool3x3(stride 2, valid)
        self.branch_3 = nn.MaxPool2d(3, stride=2, padding=0)

    def forward(self, x):
        x0 = self.branch_0(x)
        x1 = self.branch_1(x)
        x2 = self.branch_2(x)
        x3 = self.branch_3(x)
        return torch.cat((x0, x1, x2, x3), dim=1)


class Inception_ResNetv2(nn.Module):
    def __init__(self, num_classes=1000, k=256, l=256, m=384, n=384):
        super(Inception_ResNetv2, self).__init__()
        blocks = []
        blocks.append(Stem(3))
        for i in range(5):
            blocks.append(Inception_ResNet_A(384, 32, 32, 32, 32, 48, 64, 384, 0.17))
        blocks.append(redutionA(384, k, l, m, n))
        for i in range(10):
            blocks.append(Inception_ResNet_B(1152, 192, 128, 160, 192, 1152, 0.10))
        blocks.append(redutionB(1152, 256, 384, 288, 288, 320))
        for i in range(4):
            blocks.append(Inception_ResNet_C(2144, 192, 192, 224, 256, 2144, 0.20))
        # final block: no ReLU after the residual add
        blocks.append(Inception_ResNet_C(2144, 192, 192, 224, 256, 2144, activation=False))
        self.features = nn.Sequential(*blocks)
        self.conv = BasicConv2d(2144, 1536, 1)
        self.global_average_pooling = nn.AdaptiveAvgPool2d((1, 1))
        # note: nn.Dropout takes the drop probability (the paper specifies keep = 0.8)
        self.dropout = nn.Dropout(0.8)
        self.linear = nn.Linear(1536, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.conv(x)
        x = self.global_average_pooling(x)
        x = x.view(x.size(0), -1)
        x = self.dropout(x)
        x = self.linear(x)
        return x


if __name__ == '__main__':
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    model = Inception_ResNetv2().to(device)
    summary(model, input_size=(3, 299, 299))
```

`summary` prints the network structure and parameter counts, which makes it convenient to inspect the assembled network.
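A minimal CPU smoke test (my addition, assuming the script above has been run): one forward pass through the full model should yield one logit per class.

```python
model = Inception_ResNetv2(num_classes=1000).eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 299, 299))
print(logits.shape)   # torch.Size([1, 1000])
```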
## Summary

This post has introduced Inception-ResNet as simply and thoroughly as possible: why and how Inception and ResNet are combined, the structure of the Inception-ResNet models, and their PyTorch implementations.