
Deep Learning Network Models: RepVGG Explained in Detail

  • 0 Preface
  • 1 The RepVGG Block in Detail
  • 2 Structural Re-parameterization
    • 2.1 Fusing Conv2d and BN
    • 2.2 Conv2d+BN Fusion Experiment (PyTorch)
    • 2.3 Converting a 1x1 Convolution into a 3x3 Convolution
    • 2.4 Converting BN into a 3x3 Convolution
    • 2.5 Multi-Branch Fusion
    • 2.6 Structural Re-parameterization Experiment (PyTorch)
  • 3 Model Configurations

Paper: RepVGG: Making VGG-style ConvNets Great Again
Paper download: https://arxiv.org/abs/2101.03697
Official source code (PyTorch): https://github.com/DingXiaoH/RepVGG

0 Preface

(figure omitted)

1 The RepVGG Block in Detail

(figure omitted)

2 Structural Re-parameterization

(figure omitted)

2.1 Fusing Conv2d and BN

(figures omitted)
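The derivation shown in the figures can be summarized in symbols. For a convolution with kernel W (no bias) followed by a BatchNorm layer with per-channel running statistics μ, σ², affine parameters γ, β, and stabilizer ε, the two layers collapse into a single convolution with a scaled kernel and a new bias (all operations below are applied per output channel):

```latex
\mathrm{BN}(W * x)
= \gamma \cdot \frac{W * x - \mu}{\sqrt{\sigma^{2} + \epsilon}} + \beta
= \underbrace{\frac{\gamma}{\sqrt{\sigma^{2} + \epsilon}}\,W}_{W_{\text{fused}}} * \, x
\; + \; \underbrace{\beta - \frac{\gamma\,\mu}{\sqrt{\sigma^{2} + \epsilon}}}_{b_{\text{fused}}}
```

These two expressions are exactly the `kernel * t` and `beta - running_mean * gamma / std` computations in the experiment below.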

2.2 Conv2d+BN Fusion Experiment (PyTorch)

(figure omitted)

from collections import OrderedDict

import numpy as np
import torch
import torch.nn as nn


def main():
    torch.random.manual_seed(0)

    f1 = torch.randn(1, 2, 3, 3)
    module = nn.Sequential(OrderedDict(
        conv=nn.Conv2d(in_channels=2, out_channels=2, kernel_size=3, stride=1, padding=1, bias=False),
        bn=nn.BatchNorm2d(num_features=2)
    ))
    module.eval()

    with torch.no_grad():
        output1 = module(f1)
        print(output1)

    # fuse conv + bn
    kernel = module.conv.weight
    running_mean = module.bn.running_mean
    running_var = module.bn.running_var
    gamma = module.bn.weight
    beta = module.bn.bias
    eps = module.bn.eps
    std = (running_var + eps).sqrt()
    t = (gamma / std).reshape(-1, 1, 1, 1)  # [ch] -> [ch, 1, 1, 1]
    kernel = kernel * t
    bias = beta - running_mean * gamma / std
    fused_conv = nn.Conv2d(in_channels=2, out_channels=2, kernel_size=3, stride=1, padding=1, bias=True)
    fused_conv.load_state_dict(OrderedDict(weight=kernel, bias=bias))

    with torch.no_grad():
        output2 = fused_conv(f1)
        print(output2)

    np.testing.assert_allclose(output1.numpy(), output2.numpy(), rtol=1e-03, atol=1e-05)
    print("convert module has been tested, and the result looks good!")


if __name__ == '__main__':
    main()

Terminal output:
(figure omitted)

2.3 Converting a 1x1 Convolution into a 3x3 Convolution

(figure omitted)
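A minimal sketch of the idea in the figure: zero-padding a 1x1 kernel by one pixel on each side yields a 3x3 kernel that produces identical outputs once the convolution's input padding is increased from 0 to 1 (the channel counts and input size below are chosen only for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 2, 5, 5)

# Original 1x1 convolution branch.
conv1x1 = nn.Conv2d(2, 3, kernel_size=1, stride=1, padding=0, bias=True)

# Pad the [out_ch, in_ch, 1, 1] kernel with zeros to [out_ch, in_ch, 3, 3];
# F.pad's last four values pad the last two dims (left, right, top, bottom).
kernel3x3 = F.pad(conv1x1.weight, [1, 1, 1, 1])

# Equivalent 3x3 convolution (note padding=1 to keep spatial size).
conv3x3 = nn.Conv2d(2, 3, kernel_size=3, stride=1, padding=1, bias=True)
conv3x3.load_state_dict({'weight': kernel3x3, 'bias': conv1x1.bias})

with torch.no_grad():
    torch.testing.assert_close(conv1x1(x), conv3x3(x))
print("1x1 -> 3x3 conversion matches")
```

The zero border of the kernel multiplies only the zero-padded ring of the input, so the extra terms contribute nothing to the output.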

2.4 Converting BN into a 3x3 Convolution

(figure omitted)
Code screenshots:
(figures omitted)
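The screenshots construct an identity 3x3 kernel so that the BN-only branch can be handled exactly like a conv+BN branch (the same trick appears in `_fuse_bn_tensor` below). A small self-contained sketch of that construction, with an illustrative channel count:

```python
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
in_channels = 2
x = torch.randn(1, in_channels, 4, 4)

bn = nn.BatchNorm2d(in_channels)
bn.eval()

# Identity 3x3 kernel: output channel i copies input channel i, i.e. the
# center element of the (i, i) 3x3 slice is 1 and everything else is 0.
kernel = np.zeros((in_channels, in_channels, 3, 3), dtype=np.float32)
for i in range(in_channels):
    kernel[i, i, 1, 1] = 1

conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1, bias=False)
conv.weight.data = torch.from_numpy(kernel)

with torch.no_grad():
    # conv(x) == x, so conv followed by BN equals BN alone,
    # and the pair can then be fused as in section 2.1.
    torch.testing.assert_close(bn(conv(x)), bn(x))
print("identity conv verified")
```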

2.5 Multi-Branch Fusion

(figure omitted)
Code screenshot:
(figure omitted)
Illustration:
(figure omitted)
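In symbols, once each branch has been fused into a single 3x3 convolution by the previous steps, convolution's linearity lets the three branches collapse into one kernel and one bias by simple addition (primes denote the per-branch fused parameters):

```latex
W_{\text{fused}} = W'_{3\times3} + \mathrm{pad}\!\left(W'_{1\times1}\right) + W'_{\text{id}},
\qquad
b_{\text{fused}} = b'_{3\times3} + b'_{1\times1} + b'_{\text{id}}
```

This is exactly what `get_equivalent_kernel_bias` computes in the experiment below.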

2.6 Structural Re-parameterization Experiment (PyTorch)

import time

import numpy as np
import torch
import torch.nn as nn


def conv_bn(in_channels, out_channels, kernel_size, stride, padding, groups=1):
    result = nn.Sequential()
    result.add_module('conv', nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                                        kernel_size=kernel_size, stride=stride, padding=padding,
                                        groups=groups, bias=False))
    result.add_module('bn', nn.BatchNorm2d(num_features=out_channels))
    return result


class RepVGGBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3,
                 stride=1, padding=1, dilation=1, groups=1, padding_mode='zeros', deploy=False):
        super(RepVGGBlock, self).__init__()
        self.deploy = deploy
        self.groups = groups
        self.in_channels = in_channels

        self.nonlinearity = nn.ReLU()

        if deploy:
            self.rbr_reparam = nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                                         kernel_size=kernel_size, stride=stride,
                                         padding=padding, dilation=dilation, groups=groups,
                                         bias=True, padding_mode=padding_mode)
        else:
            self.rbr_identity = nn.BatchNorm2d(num_features=in_channels) \
                if out_channels == in_channels and stride == 1 else None
            self.rbr_dense = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size,
                                     stride=stride, padding=padding, groups=groups)
            self.rbr_1x1 = conv_bn(in_channels=in_channels, out_channels=out_channels, kernel_size=1,
                                   stride=stride, padding=0, groups=groups)

    def forward(self, inputs):
        if hasattr(self, 'rbr_reparam'):
            return self.nonlinearity(self.rbr_reparam(inputs))

        if self.rbr_identity is None:
            id_out = 0
        else:
            id_out = self.rbr_identity(inputs)

        return self.nonlinearity(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out)

    def get_equivalent_kernel_bias(self):
        kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
        kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1)
        kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity)
        return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid

    def _pad_1x1_to_3x3_tensor(self, kernel1x1):
        if kernel1x1 is None:
            return 0
        else:
            return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])

    def _fuse_bn_tensor(self, branch):
        if branch is None:
            return 0, 0

        if isinstance(branch, nn.Sequential):
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        else:
            assert isinstance(branch, nn.BatchNorm2d)
            if not hasattr(self, 'id_tensor'):
                input_dim = self.in_channels // self.groups
                kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32)
                for i in range(self.in_channels):
                    kernel_value[i, i % input_dim, 1, 1] = 1
                self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
            kernel = self.id_tensor
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps

        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std

    def switch_to_deploy(self):
        if hasattr(self, 'rbr_reparam'):
            return

        kernel, bias = self.get_equivalent_kernel_bias()
        self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.conv.in_channels,
                                     out_channels=self.rbr_dense.conv.out_channels,
                                     kernel_size=self.rbr_dense.conv.kernel_size, stride=self.rbr_dense.conv.stride,
                                     padding=self.rbr_dense.conv.padding, dilation=self.rbr_dense.conv.dilation,
                                     groups=self.rbr_dense.conv.groups, bias=True)
        self.rbr_reparam.weight.data = kernel
        self.rbr_reparam.bias.data = bias
        for para in self.parameters():
            para.detach_()
        self.__delattr__('rbr_dense')
        self.__delattr__('rbr_1x1')
        if hasattr(self, 'rbr_identity'):
            self.__delattr__('rbr_identity')
        if hasattr(self, 'id_tensor'):
            self.__delattr__('id_tensor')
        self.deploy = True


def main():
    f1 = torch.randn(1, 64, 64, 64)
    block = RepVGGBlock(in_channels=64, out_channels=64)
    block.eval()

    with torch.no_grad():
        output1 = block(f1)
        start_time = time.time()
        for _ in range(100):
            block(f1)
        print(f"consume time: {time.time() - start_time}")

        # re-parameterization
        block.switch_to_deploy()
        output2 = block(f1)
        start_time = time.time()
        for _ in range(100):
            block(f1)
        print(f"consume time: {time.time() - start_time}")

        np.testing.assert_allclose(output1.numpy(), output2.numpy(), rtol=1e-03, atol=1e-05)
        print("convert module has been tested, and the result looks good!")


if __name__ == '__main__':
    main()

Terminal output:
(figure omitted)
Comparing the two timings shows that inference is roughly twice as fast after structural re-parameterization, while the outputs before and after conversion agree within tolerance.

3 Model Configurations

(figure omitted)
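The configuration table appears only as an image above. As a rough sketch of what it contains (per-stage block counts and the width multipliers a, b recalled from the paper's tables, so treat the exact values as an assumption to check against the paper), the variants differ only in depth and width:

```python
# RepVGG variants: (blocks per stage, (a, b)), as listed in the paper.
# Base stage widths are 64, 128, 256, 512, scaled to
# min(64, 64a), 64a, 128a, 256a, 512b.
repvgg_configs = {
    'RepVGG-A0': ([1, 2, 4, 14, 1], (0.75, 2.5)),
    'RepVGG-A1': ([1, 2, 4, 14, 1], (1.0, 2.5)),
    'RepVGG-A2': ([1, 2, 4, 14, 1], (1.5, 2.75)),
    'RepVGG-B0': ([1, 4, 6, 16, 1], (1.0, 2.5)),
    'RepVGG-B1': ([1, 4, 6, 16, 1], (2.0, 4.0)),
    'RepVGG-B2': ([1, 4, 6, 16, 1], (2.5, 5.0)),
    'RepVGG-B3': ([1, 4, 6, 16, 1], (3.0, 5.0)),
}


def stage_widths(a, b):
    """Channel widths of the five stages for the given multipliers."""
    return [min(64, int(64 * a)), int(64 * a), int(128 * a), int(256 * a), int(512 * b)]


print(stage_widths(*repvgg_configs['RepVGG-A0'][1]))  # -> [48, 48, 96, 192, 1280]
```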
