Contents
- Preface
- 1. Dataset Introduction
- 2. Preliminary Work
- 3. Loading the Dataset
- 4. Building the CA Attention Module
- 5. Building the Model
- 6. Training

## Preface

Following MobileNetV2, Google published its improved version, MobileNetV3, in 2019. MobileNetV3 comes in two variants, MobileNetV3-Large and MobileNetV3-Small. On ImageNet classification, MobileNetV3-Large is roughly 3.2% more accurate than MobileNetV2 while cutting latency by about 20%, and MobileNetV3-Small improves accuracy by about 6.6% while reducing latency by roughly 23%.

In this post we use MobileNetV3 to recognize pneumonia, and we replace the SE attention module in the original network with the CA (Coordinate Attention) mechanism.

My environment:
- Base environment: Python 3.7
- Editor: Jupyter Notebook
- Deep learning framework: PyTorch

## 1. Dataset Introduction

The ChestXRay2017 dataset contains 5,856 chest X-ray images. The diagnostic labels (i.e., the classification targets) are normal and pneumonia, and the pneumonia class can be further divided into bacterial and viral pneumonia.

The chest X-rays were selected from a retrospective study of pediatric patients aged one to five at the Guangzhou Women and Children's Medical Center, and all scans were taken as part of the patients' routine clinical care. Before analysis, all radiographs were screened to remove low-quality or unreadable scans, ensuring image quality. Two expert physicians then graded the diagnoses, and to reduce diagnostic errors a third expert additionally reviewed the test set.

The data is organized into two top-level folders, train and test, used for model training and testing respectively. Each folder is further split into the NORMAL and PNEUMONIA classes. Inside the PNEUMONIA folder, bacterial and viral cases can be told apart by the image file names.
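Before writing any training code it helps to confirm the folder layout and how many images each split and class contains. Below is a minimal sketch; the root path `ChestXRay2017/chest_xray` is an assumption about where the archive was extracted.

```python
import os

root = 'ChestXRay2017/chest_xray'  # assumed extraction path
for split in ('train', 'test'):
    for cls in ('NORMAL', 'PNEUMONIA'):
        folder = os.path.join(root, split, cls)
        # Count the files in each class folder (0 if the folder is missing)
        n = len(os.listdir(folder)) if os.path.isdir(folder) else 0
        print(f'{split}/{cls}: {n} images')
```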
## 2. Preliminary Work

```python
import torch
from torch import nn
import torch.utils.data as Data
from torchvision.transforms import transforms
import torchvision
import torchsummary

# Select the device
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
```

## 3. Loading the Dataset

```python
data_transform = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]),
    'val': transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])}

train_data = torchvision.datasets.ImageFolder(root=r'ChestXRay2017/chest_xray/train',
                                              transform=data_transform['train'])
train_dataloader = Data.DataLoader(train_data, batch_size=48, shuffle=True)

test_data = torchvision.datasets.ImageFolder(root=r'ChestXRay2017/chest_xray/test',
                                             transform=data_transform['val'])
test_dataloader = Data.DataLoader(test_data, batch_size=48, shuffle=True)
```
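As a quick check that the loaders work, you can pull a single batch and look at its shape and the label mapping that ImageFolder inferred from the folder names. This is a small illustrative snippet, not part of the original pipeline:

```python
# Fetch one batch: images should be [batch, 3, 224, 224] after the train transforms
images, labels = next(iter(train_dataloader))
print(images.shape, labels[:8])
# Mapping from class folder name to label index, e.g. {'NORMAL': 0, 'PNEUMONIA': 1}
print(train_data.class_to_idx)
```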
## 4. Building the CA Attention Module

Attention mechanisms are known to help across a wide range of computer vision tasks such as image classification and image segmentation. The most classic and best-known example is SENet, which simply squeezes each 2D feature map to efficiently model the interdependencies between channels.

Although the SE block has been widely used in recent years, it only re-weights the importance of each channel by modeling relationships between channels, and it ignores positional information, which matters for generating spatially selective attention maps. To address this, a newer attention block was introduced that considers not only inter-channel relationships but also positional information in the feature space: the CA (Coordinate Attention) mechanism.

```python
class h_swish(nn.Module):
    def __init__(self, inplace=True):
        super(h_swish, self).__init__()
        self.relu6 = nn.ReLU6()

    def forward(self, x):
        return x * self.relu6(x + 3) / 6


class CoordAtt(nn.Module):
    def __init__(self, inp, oup, groups=32):
        super(CoordAtt, self).__init__()
        # Pool along each spatial direction separately: (h, 1) and (1, w)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))

        mip = max(8, inp // groups)

        self.conv1 = nn.Conv2d(inp, mip, kernel_size=1, stride=1, padding=0)
        self.bn1 = nn.BatchNorm2d(mip)
        self.conv2 = nn.Conv2d(mip, oup, kernel_size=1, stride=1, padding=0)
        self.conv3 = nn.Conv2d(mip, oup, kernel_size=1, stride=1, padding=0)
        self.relu = h_swish()

    def forward(self, x):
        identity = x
        n, c, h, w = x.size()
        x_h = self.pool_h(x)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)

        y = torch.cat([x_h, x_w], dim=2)
        y = self.conv1(y)
        y = self.bn1(y)
        y = self.relu(y)
        x_h, x_w = torch.split(y, [h, w], dim=2)
        x_w = x_w.permute(0, 1, 3, 2)

        x_h = self.conv2(x_h).sigmoid()
        x_w = self.conv3(x_w).sigmoid()
        x_h = x_h.expand(-1, -1, h, w)
        x_w = x_w.expand(-1, -1, h, w)

        y = identity * x_w * x_h
        # y = x_w * x_h
        return y


class CA_SA(nn.Module):
    # Optional variant that combines CA with a spatial attention branch.
    # Note: it relies on an external Spatial_Attention_Module definition and is
    # not used by the model below.
    def __init__(self, inchannel, outchannel):
        super(CA_SA, self).__init__()
        self.CA = CoordAtt(inchannel, outchannel)
        self.SA = Spatial_Attention_Module(7)

    def forward(self, x):
        y = self.CA(x)
        z = self.SA(x)
        return x * y * z
```
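A small sanity check of the module above: Coordinate Attention only rescales the feature map, so its output should keep the input's shape. The sizes below are arbitrary example values:

```python
# Feed a random feature map through CoordAtt and confirm the shape is unchanged
ca = CoordAtt(inp=64, oup=64)
feat = torch.randn(2, 64, 28, 28)
print(ca(feat).shape)  # expected: torch.Size([2, 64, 28, 28])
```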
## 5. Building the Model

```python
import torch.nn as nn
import torch
import torchsummary

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')


# h-swish activation
class HardSwish(nn.Module):
    def __init__(self, inplace=True):
        super(HardSwish, self).__init__()
        self.relu6 = nn.ReLU6()

    def forward(self, x):
        return x * self.relu6(x + 3) / 6


# Depthwise (DW) convolution
def ConvBNActivation(in_channels, out_channels, kernel_size, stride, activate):
    # padding = (kernel_size - 1) // 2 makes stride=2 halve h/w regardless of kernel_size:
    #   kernel_size=3, padding=1: stride=2 halves h/w, stride=1 keeps them
    #   kernel_size=5, padding=2: stride=2 halves h/w, stride=1 keeps them
    # so the spatial size is controlled by the stride alone, independent of kernel_size.
    return nn.Sequential(
        nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=stride,
                  padding=(kernel_size - 1) // 2, groups=in_channels),
        nn.BatchNorm2d(out_channels),
        nn.ReLU6() if activate == 'relu' else HardSwish()
    )


class Inceptionnext(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, stride, activate):
        super(Inceptionnext, self).__init__()
        gc = int(in_channels * 1 / 4)  # channel number of a convolution branch
        # self.dwconv_hw = nn.Conv2d(gc, gc, kernel_size, stride=stride, padding=(kernel_size-1)//2, groups=gc)
        self.dwconv_hw1 = nn.Conv2d(gc, gc, (1, kernel_size), stride=stride, padding=(0, (kernel_size - 1) // 2),
                                    groups=gc)
        self.dwconv_hw2 = nn.Conv2d(gc, gc, (kernel_size, 1), stride=stride, padding=((kernel_size - 1) // 2, 0),
                                    groups=gc)
        self.dwconv_hw = nn.Sequential(
            nn.Conv2d(gc, gc, (1, kernel_size), stride=stride, padding=(0, (kernel_size - 1) // 2), groups=gc),
            nn.Conv2d(gc, gc, (kernel_size, 1), stride=stride, padding=((kernel_size - 1) // 2, 0), groups=gc)
        )
        # self.dwconv_hw = nn.Sequential(
        #     nn.Conv2d(gc, gc//2, kernel_size=1, stride=1),
        #     nn.Conv2d(gc//2, gc//2, (1, kernel_size), stride=stride, padding=(0, (kernel_size - 1) // 2), groups=gc//2),
        #     nn.Conv2d(gc//2, gc//2, (kernel_size, 1), stride=stride, padding=((kernel_size - 1) // 2, 0), groups=gc//2)
        # )
        self.dwconv_w = nn.Conv2d(gc, gc, kernel_size=(1, 11), stride=stride, padding=(0, 11 // 2), groups=gc)
        self.dwconv_h = nn.Conv2d(gc, gc, kernel_size=(11, 1), stride=stride, padding=(11 // 2, 0), groups=gc)
        self.batch2d = nn.BatchNorm2d(out_channels)
        self.activate = nn.ReLU6() if activate == 'relu' else HardSwish()
        self.split_indexes = (gc, gc, gc, in_channels - 3 * gc)
        self.cheap = nn.Sequential(
            nn.Conv2d(gc // 2, gc // 2, (1, 3), stride=stride, padding=(0, (3 - 1) // 2), groups=gc // 2),
            nn.Conv2d(gc // 2, gc // 2, (3, 1), stride=stride, padding=((3 - 1) // 2, 0), groups=gc // 2)
        )

    def forward(self, x):
        # B, C, H, W = x.shape
        x_hw, x_w, x_h, x_id = torch.split(x, self.split_indexes, dim=1)
        x = torch.cat(
            (self.dwconv_hw(x_hw),
             self.dwconv_w(x_w),
             self.dwconv_h(x_h),
             x_id),
            dim=1)
        # x = torch.cat(
        #     (torch.cat((self.dwconv_hw(x_hw), self.cheap(self.dwconv_hw(x_hw))), dim=1),
        #      self.dwconv_w(x_w),
        #      self.dwconv_h(x_h),
        #      x_id),
        #     dim=1)
        x = self.batch2d(x)
        x = self.activate(x)
        return x


# Pointwise (PW) convolution (followed by the fully connected layers)
def Conv1x1BN(in_channels, out_channels):
    return nn.Sequential(
        nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=1),
        nn.BatchNorm2d(out_channels)
    )


class SqueezeAndExcite(nn.Module):
    def __init__(self, in_channels, out_channels, se_kernel_size, divide=4):
        super(SqueezeAndExcite, self).__init__()
        mid_channels = in_channels // divide  # reduce the channel dimension to 1/4
        # Average-pool the current feature map down to 1x1
        self.pool = nn.AvgPool2d(kernel_size=se_kernel_size, stride=1)
        # Two fully connected layers that output a weight for each channel
        self.SEblock = nn.Sequential(
            nn.Linear(in_features=in_channels, out_features=mid_channels),
            nn.ReLU6(),
            nn.Linear(in_features=mid_channels, out_features=out_channels),
            HardSwish(),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        out = self.pool(x)           # pool h and w down to 1 regardless of their size
        out = out.reshape([b, -1])   # flatten before the fully connected layers
        # Channel weights from the attention branch
        out = self.SEblock(out)
        # out holds one weight per channel; restore the spatial dims before multiplying
        out = out.reshape([b, c, 1, 1])
        return out * x


# # A plain 1x1 convolution variant (kept from the original, commented out)
# class Conv1x1BNActivation(nn.Module):
#     def __init__(self, inchannel, outchannel, activate):
#         super(Conv1x1BNActivation, self).__init__()
#         self.first = nn.Sequential(
#             nn.Conv2d(inchannel, outchannel//2, kernel_size=1, stride=1),
#             nn.Conv2d(outchannel//2, outchannel//2, kernel_size=3, stride=1, padding=1, groups=outchannel//2)
#         )
#         self.second = nn.Conv2d(outchannel//2, outchannel//2, kernel_size=3, stride=1, padding=1, groups=outchannel//2)
#         self.BN = nn.BatchNorm2d(outchannel)
#         self.act = nn.ReLU6() if activate == 'relu' else HardSwish()
#     def forward(self, x):
#         x = self.first(x)
#         y = torch.cat((x, self.second(x)), dim=1)
#         y = self.BN(y)
#         y = self.act(y)
#         return y
def Conv1x1BNActivation(in_channels, out_channels, activate):
    return nn.Sequential(
        nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=1),
        nn.BatchNorm2d(out_channels),
        nn.ReLU6() if activate == 'relu' else HardSwish()
    )


class SEInvertedBottleneck(nn.Module):
    def __init__(self, in_channels, mid_channels, out_channels, kernel_size, stride, activate, use_se,
                 se_kernel_size=1):
        super(SEInvertedBottleneck, self).__init__()
        self.stride = stride
        self.use_se = use_se
        self.in_channels = in_channels
        self.out_channels = out_channels
        # mid_channels = (in_channels * expansion_factor)

        # Plain 1x1 convolution to expand the channel dimension
        self.conv = Conv1x1BNActivation(in_channels, mid_channels, activate)

        # DW convolution keeps the channel count but may change the spatial size via stride (groups=in_channels)
        if stride == 1:
            self.depth_conv = Inceptionnext(mid_channels, mid_channels, kernel_size, stride, activate)
        else:
            self.depth_conv = ConvBNActivation(mid_channels, mid_channels, kernel_size, stride, activate)
        # self.depth_conv = ConvBNActivation(mid_channels, mid_channels, kernel_size, stride, activate)

        # Whether to apply an attention block
        if self.use_se:
            # self.SEblock = SqueezeAndExcite(mid_channels, mid_channels, se_kernel_size)
            # self.SEblock = CBAM.CBAMBlock("FC", 5, channels=mid_channels, ratio=9)
            self.SEblock = CoordAtt(mid_channels, mid_channels)
            # self.SEblock = CAblock.CA_SA(mid_channels, mid_channels)

        # PW convolution to project the channels back down
        self.point_conv = Conv1x1BN(mid_channels, out_channels)

        # Whether a shortcut is needed
        if self.stride == 1:
            self.shortcut = Conv1x1BN(in_channels, out_channels)

    def forward(self, x):
        # expansion followed by the DW convolution
        out = self.depth_conv(self.conv(x))
        # apply attention when use_se=True
        if self.use_se:
            out = self.SEblock(out)
        # PW convolution
        out = self.point_conv(out)
        # Residual connection
        # Option 1: look at the stride only; when shapes differ, use a 1x1 convolution so they can be added
        # out = (out + self.shortcut(x)) if self.stride == 1 else out
        # Option 2: require both stride 1 and matching in/out channels, so no 1x1 convolution is forced
        out = (out + x) if self.stride == 1 and self.in_channels == self.out_channels else out

        return out


class MobileNetV3(nn.Module):
    def __init__(self, num_classes=8, type='large'):
        super(MobileNetV3, self).__init__()
        self.type = type

        # 224x224x3  conv2d  3 -> 16  SE=False  HS  s=2
        self.first_conv = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(16),
            HardSwish(),
        )
        # torch.Size([1, 16, 112, 112])

        # MobileNetV3_Large network structure
        if type == 'large':
            self.large_bottleneck = nn.Sequential(
                # torch.Size([1, 16, 112, 112]) 16 -> 16 -> 16  SE=False  RE  s=1
                SEInvertedBottleneck(in_channels=16, mid_channels=16, out_channels=16, kernel_size=3, stride=1,
                                     activate='relu', use_se=False),
                # torch.Size([1, 16, 112, 112]) 16 -> 64 -> 24  SE=False  RE  s=2
                SEInvertedBottleneck(in_channels=16, mid_channels=64, out_channels=24, kernel_size=3, stride=2,
                                     activate='relu', use_se=False),
                # torch.Size([1, 24, 56, 56]) 24 -> 72 -> 24  SE=False  RE  s=1
                SEInvertedBottleneck(in_channels=24, mid_channels=72, out_channels=24, kernel_size=3, stride=1,
                                     activate='relu', use_se=False),
                # torch.Size([1, 24, 56, 56]) 24 -> 72 -> 40  SE=True  RE  s=2
                SEInvertedBottleneck(in_channels=24, mid_channels=72, out_channels=40, kernel_size=5, stride=2,
                                     activate='relu', use_se=True, se_kernel_size=28),
                # torch.Size([1, 40, 28, 28]) 40 -> 120 -> 40  SE=True  RE  s=1
                SEInvertedBottleneck(in_channels=40, mid_channels=120, out_channels=40, kernel_size=5, stride=1,
                                     activate='relu', use_se=True, se_kernel_size=28),
                # torch.Size([1, 40, 28, 28]) 40 -> 120 -> 40  SE=True  RE  s=1
                SEInvertedBottleneck(in_channels=40, mid_channels=120, out_channels=40, kernel_size=5, stride=1,
                                     activate='relu', use_se=True, se_kernel_size=28),
                # torch.Size([1, 40, 28, 28]) 40 -> 240 -> 80  SE=False  HS  s=1
                SEInvertedBottleneck(in_channels=40, mid_channels=240, out_channels=80, kernel_size=3, stride=1,
                                     activate='hswish', use_se=False),
                # torch.Size([1, 80, 28, 28]) 80 -> 200 -> 80  SE=False  HS  s=1
                SEInvertedBottleneck(in_channels=80, mid_channels=200, out_channels=80, kernel_size=3, stride=1,
                                     activate='hswish', use_se=False),
                # torch.Size([1, 80, 28, 28]) 80 -> 184 -> 80  SE=False  HS  s=2
                SEInvertedBottleneck(in_channels=80, mid_channels=184, out_channels=80, kernel_size=3, stride=2,
                                     activate='hswish', use_se=False),
                # torch.Size([1, 80, 14, 14]) 80 -> 184 -> 80  SE=False  HS  s=1
                SEInvertedBottleneck(in_channels=80, mid_channels=184, out_channels=80, kernel_size=3, stride=1,
                                     activate='hswish', use_se=False),
                # torch.Size([1, 80, 14, 14]) 80 -> 480 -> 112  SE=True  HS  s=1
                SEInvertedBottleneck(in_channels=80, mid_channels=480, out_channels=112, kernel_size=3, stride=1,
                                     activate='hswish', use_se=True, se_kernel_size=14),
                # torch.Size([1, 112, 14, 14]) 112 -> 672 -> 112  SE=True  HS  s=1
                SEInvertedBottleneck(in_channels=112, mid_channels=672, out_channels=112, kernel_size=3, stride=1,
                                     activate='hswish', use_se=True, se_kernel_size=14),
                # torch.Size([1, 112, 14, 14]) 112 -> 672 -> 160  SE=True  HS  s=2
                SEInvertedBottleneck(in_channels=112, mid_channels=672, out_channels=160, kernel_size=5, stride=2,
                                     activate='hswish', use_se=True, se_kernel_size=7),
                # torch.Size([1, 160, 7, 7]) 160 -> 960 -> 160  SE=True  HS  s=1
                SEInvertedBottleneck(in_channels=160, mid_channels=960, out_channels=160, kernel_size=5, stride=1,
                                     activate='hswish', use_se=True, se_kernel_size=7),
                # torch.Size([1, 160, 7, 7]) 160 -> 960 -> 160  SE=True  HS  s=1
                SEInvertedBottleneck(in_channels=160, mid_channels=960, out_channels=160, kernel_size=5, stride=1,
                                     activate='hswish', use_se=True, se_kernel_size=7),
            )
            # torch.Size([1, 160, 7, 7])

            # Compared with MobileNetV2, the tail of the network is restructured to be more efficient
            self.large_last_stage = nn.Sequential(
                nn.Conv2d(in_channels=160, out_channels=960, kernel_size=1, stride=1),
                nn.BatchNorm2d(960),
                HardSwish(),
                nn.AvgPool2d(kernel_size=7, stride=1),
                nn.Conv2d(in_channels=960, out_channels=1280, kernel_size=1, stride=1),
                HardSwish(),
            )

        # MobileNetV3_Small network structure
        if type == 'small':
            self.small_bottleneck = nn.Sequential(
                # torch.Size([1, 16, 112, 112]) 16 -> 16 -> 16  SE=True  RE  s=2
                SEInvertedBottleneck(in_channels=16, mid_channels=16, out_channels=16, kernel_size=3, stride=2,
                                     activate='relu', use_se=True, se_kernel_size=56),
                # torch.Size([1, 16, 56, 56]) 16 -> 72 -> 24  SE=False  RE  s=2
                SEInvertedBottleneck(in_channels=16, mid_channels=72 // 2, out_channels=24, kernel_size=3, stride=2,
                                     activate='relu', use_se=False),
                # torch.Size([1, 24, 28, 28]) 24 -> 88 -> 24  SE=False  RE  s=1
                SEInvertedBottleneck(in_channels=24, mid_channels=88 // 2, out_channels=24, kernel_size=3, stride=1,
                                     activate='relu', use_se=False),
                # torch.Size([1, 24, 28, 28]) 24 -> 96 -> 40  SE=True  HS  s=2
                SEInvertedBottleneck(in_channels=24, mid_channels=96 // 2, out_channels=40, kernel_size=5, stride=2,
                                     activate='hswish', use_se=True, se_kernel_size=14),
                # torch.Size([1, 40, 14, 14]) 40 -> 240 -> 40  SE=True  HS  s=1
                SEInvertedBottleneck(in_channels=40, mid_channels=240 // 2, out_channels=40, kernel_size=5, stride=1,
                                     activate='hswish', use_se=True, se_kernel_size=14),
                # torch.Size([1, 40, 14, 14]) 40 -> 240 -> 40  SE=True  HS  s=1
                SEInvertedBottleneck(in_channels=40, mid_channels=240 // 2, out_channels=40, kernel_size=5, stride=1,
                                     activate='hswish', use_se=True, se_kernel_size=14),
                # torch.Size([1, 40, 14, 14]) 40 -> 120 -> 48  SE=True  HS  s=1
                SEInvertedBottleneck(in_channels=40, mid_channels=120 // 2, out_channels=48, kernel_size=5, stride=1,
                                     activate='hswish', use_se=True, se_kernel_size=14),
                # torch.Size([1, 48, 14, 14]) 48 -> 144 -> 48  SE=True  HS  s=1
                SEInvertedBottleneck(in_channels=48, mid_channels=144 // 2, out_channels=48, kernel_size=5, stride=1,
                                     activate='hswish', use_se=True, se_kernel_size=14),
                # torch.Size([1, 48, 14, 14]) 48 -> 288 -> 96  SE=True  HS  s=2
                SEInvertedBottleneck(in_channels=48, mid_channels=288 // 2, out_channels=96, kernel_size=5, stride=2,
                                     activate='hswish', use_se=True, se_kernel_size=7),
                # torch.Size([1, 96, 7, 7]) 96 -> 576 -> 96  SE=True  HS  s=1
                SEInvertedBottleneck(in_channels=96, mid_channels=576 // 2, out_channels=96, kernel_size=5, stride=1,
                                     activate='hswish', use_se=True, se_kernel_size=7),
                # torch.Size([1, 96, 7, 7]) 96 -> 576 -> 96  SE=True  HS  s=1
                SEInvertedBottleneck(in_channels=96, mid_channels=576 // 2, out_channels=96, kernel_size=5, stride=1,
                                     activate='hswish', use_se=True, se_kernel_size=7),
            )
            # torch.Size([1, 96, 7, 7])

            # Compared with MobileNetV2, the tail of the network is restructured to be more efficient
            self.small_last_stage = nn.Sequential(
                nn.Conv2d(in_channels=96, out_channels=576, kernel_size=1, stride=1),
                nn.BatchNorm2d(576),
                HardSwish(),
                nn.AvgPool2d(kernel_size=7, stride=1),
                nn.Conv2d(in_channels=576, out_channels=1280, kernel_size=1, stride=1),
                HardSwish(),
            )

        self.dropout = nn.Dropout(0.5)
        self.classifier = nn.Linear(in_features=1280, out_features=num_classes)
        # self.init_params()

    def forward(self, x):
        x = self.first_conv(x)            # torch.Size([1, 16, 112, 112])
        if self.type == 'large':
            x = self.large_bottleneck(x)  # torch.Size([1, 160, 7, 7])
            x = self.large_last_stage(x)  # torch.Size([1, 1280, 1, 1])
        if self.type == 'small':
            x = self.small_bottleneck(x)  # torch.Size([1, 96, 7, 7])
            x = self.small_last_stage(x)  # torch.Size([1, 1280, 1, 1])
        x = x.reshape((x.shape[0], -1))   # torch.Size([1, 1280])
        x = self.dropout(x)
        x = self.classifier(x)            # torch.Size([1, num_classes])
        return x


if __name__ == '__main__':
    models = MobileNetV3(8, type='large').to(device)
    input = torch.randn(size=[1, 3, 224, 224]).to(device)
    out = models(input)
    print(out.shape)
    torchsummary.summary(models, input_size=(3, 224, 224))
```
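For reference, a short sketch (not in the original post) that builds both variants defined above and compares their trainable parameter counts:

```python
# Instantiate each variant on the CPU and count trainable parameters
for variant in ('large', 'small'):
    net = MobileNetV3(num_classes=8, type=variant)
    n_params = sum(p.numel() for p in net.parameters() if p.requires_grad)
    print(f'{variant}: {n_params / 1e6:.2f} M parameters')
```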
## 6. Training

```python
import numpy

models = MobileNetV3(8, type='large').to('cuda')
# Optimizer
optim = torch.optim.Adam(lr=0.001, params=models.parameters())
# Loss function
loss_fn = torch.nn.CrossEntropyLoss().to('cuda')

bestacc = 0
for epoch in range(20):
    # Note: the counters are named *_num so they do not shadow the ImageFolder datasets
    train_data_num = 0
    acc_data = 0
    loss_data = 0
    models.train()
    for batch_id, data in enumerate(train_dataloader):
        x_data, label = data
        predicts = models(x_data.to('cuda'))
        loss = loss_fn(predicts, label.to('cuda'))

        acc = numpy.sum(numpy.argmax(predicts.cpu().detach().numpy(), axis=1) == label.numpy())
        train_data_num += len(x_data)
        acc_data += acc
        loss_data += loss.detach()
        # callbacks.step(loss)
        loss.backward()
        optim.step()
        optim.zero_grad()

    accuracy = acc_data / train_data_num
    all_loss = loss_data / (batch_id + 1)
    print(f'train: epoch:{epoch} acc:{accuracy} loss:{all_loss.item()}', end=' ')

    if epoch % 1 == 0:  # evaluate on the test set every epoch
        models.eval()
        test_data_num = 0
        acc_data = 0
        for batch_id, data in enumerate(test_dataloader):
            x_data, label = data
            predicts = models(x_data.to('cuda'))
            acc = numpy.sum(numpy.argmax(predicts.cpu().detach().numpy(), axis=1) == label.numpy())
            test_data_num += len(x_data)
            acc_data += acc
        accuracy = acc_data / test_data_num
        print(f'test: acc:{accuracy}')
        if accuracy > bestacc:
            torch.save(models.state_dict(), 'best.pth')
            bestacc = accuracy
print('Done')
```
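Once training has produced `best.pth`, the weights can be loaded back for single-image inference. The snippet below is only a sketch: `sample.jpeg` is a hypothetical image path, and it assumes a CUDA device is available, just as the training loop does.

```python
from PIL import Image

# Restore the best weights saved by the training loop above
models.load_state_dict(torch.load('best.pth'))
models.eval()

img = Image.open('sample.jpeg').convert('RGB')      # hypothetical image path
x = data_transform['val'](img).unsqueeze(0).to('cuda')
with torch.no_grad():
    pred = models(x).argmax(dim=1).item()
# Print the predicted index alongside ImageFolder's class list for reference
print('predicted class index:', pred, '->', test_data.classes)
```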