
Learning references:

  • CNN Visualization: Convolutional Features
  • https://github.com/wmn7/ML_Practice/blob/master/2019_05_27/filter_visualizer.ipynb

Contents

  • Filter activations


Filter activations

The idea: find an image that maximizes the activation of a chosen filter in a chosen layer; that image shows the pattern this filter detects.

A worked example. The procedure is:

  1. Initialize an image of size 56×56;
  2. Use a pretrained VGG16 network and keep its parameters fixed;
  3. To visualize the k-th filter of the conv at layer 40 (the code below hooks layer 42, the ReLU that directly follows this conv), set the loss to (-1 × that filter's activation);
  4. Run gradient descent to update the initial image;
  5. Upscale the resulting image by a factor of 1.2 to get a new image, and repeat the steps above on it.

Step 5 is the key one. Notice that the initial image is quite small, only 56×56. The original author found in practice that if the initial image is large, the resulting features have a higher spatial frequency, i.e. the visualization does not come out as clean as it does here.
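Before the full script, here is a condensed, hedged sketch of steps 3 and 4 for a single scale: the loss is simply the negative mean activation of the target filter, captured through a forward hook. The names `feats`, `activation_hook`, `layer_idx` and `filter_idx` are illustrative only; `model` and `img_var` are assumed to be set up as in the script below.

# Condensed sketch of steps 3 and 4 (single scale); `model` and `img_var` as in the full script below
feats = {}
def activation_hook(module, inp, out):
    feats["out"] = out                              # keep this layer's output around

handle = list(model.children())[layer_idx].register_forward_hook(activation_hook)
optimizer = torch.optim.Adam([img_var], lr=0.1)
for _ in range(25):
    optimizer.zero_grad()
    model(img_var)                                  # forward pass fills feats["out"]
    loss = -feats["out"][0, filter_idx].mean()      # negative activation => gradient ascent
    loss.backward()
    optimizer.step()
handle.remove()                                     # detach the hook when done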

import torch
from torch.autograd import Variable
from PIL import Image, ImageOps
import torchvision.transforms as transforms
import torchvision.models as models
import numpy as np
import cv2
from cv2 import resize
from matplotlib import pyplot as plt

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

"initialize input image"
sz = 56
img = np.uint8(np.random.uniform(150, 180, (3, sz, sz))) / 255  # (3, 56, 56)
img = torch.from_numpy(img[None]).float().to(device)  # (1, 3, 56, 56)

"pretrained model"
model_vgg16 = models.vgg16_bn(pretrained=True).features.to(device).eval()
# downloads /home/xxx/.cache/torch/hub/checkpoints/vgg16_bn-6c64b313.pth, 500M+
# print(model_vgg16)
# print(len(list(model_vgg16.children())))  # 44
# print(list(model_vgg16.children()))

"get the filter's output of one layer"
# use a forward hook to grab the output of an intermediate layer
class SaveFeatures():
    def __init__(self, module):
        self.hook = module.register_forward_hook(self.hook_fn)
    def hook_fn(self, module, input, output):
        self.features = output.clone()
    def close(self):
        self.hook.remove()

layer = 42
activations = SaveFeatures(list(model_vgg16.children())[layer])

"backpropagation, setting hyper-parameters"
lr = 0.1
opt_steps = 25          # optimization steps per scale
filters = 265           # maximize the activation of filter 265 in layer 42
upscaling_steps = 13    # number of upscaling rounds
blur = 3
upscaling_factor = 1.2  # upscaling ratio

"preprocessing of datasets"
cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).view(-1, 1, 1).to(device)
cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).view(-1, 1, 1).to(device)

"gradient descent"
for epoch in range(upscaling_steps):  # scale the image up upscaling_steps times
    img = (img - cnn_normalization_mean) / cnn_normalization_std
    img[img > 1] = 1
    img[img < 0] = 0
    print("Image Shape1:", img.shape)
    img_var = Variable(img, requires_grad=True)  # convert image to Variable that requires grad

    "optimizer"
    optimizer = torch.optim.Adam([img_var], lr=lr, weight_decay=1e-6)

    for n in range(opt_steps):
        optimizer.zero_grad()
        model_vgg16(img_var)  # forward
        loss = -activations.features[0, filters].mean()  # maximize the activation
        loss.backward()
        optimizer.step()

    "restore the image"
    print("Loss:", loss.cpu().detach().numpy())
    img = img_var * cnn_normalization_std + cnn_normalization_mean
    img[img > 1] = 1
    img[img < 0] = 0
    img = img.data.cpu().numpy()[0].transpose(1, 2, 0)
    sz = int(upscaling_factor * sz)  # calculate new image size
    img = cv2.resize(img, (sz, sz), interpolation=cv2.INTER_CUBIC)  # scale image up
    if blur is not None:
        img = cv2.blur(img, (blur, blur))  # blur image to reduce high frequency patterns
    print("Image Shape2:", img.shape)
    img = torch.from_numpy(img.transpose(2, 0, 1)[None]).to(device)
    print("Image Shape3:", img.shape)
    print(str(epoch), ", Finished")
    print("=" * 10)

activations.close()  # remove the hook

"save the result"
image = img.cpu().clone()
image = image.squeeze(0)
unloader = transforms.ToPILImage()
image = unloader(image)
image = cv2.cvtColor(np.asarray(image), cv2.COLOR_RGB2BGR)
cv2.imwrite("res1.jpg", image)
torch.cuda.empty_cache()

"""
Image Shape1: torch.Size([1, 3, 56, 56])
Loss: -6.0634975
Image Shape2: (67, 67, 3)
Image Shape3: torch.Size([1, 3, 67, 67])
0 , Finished
==========
Image Shape1: torch.Size([1, 3, 67, 67])
Loss: -7.8898916
Image Shape2: (80, 80, 3)
Image Shape3: torch.Size([1, 3, 80, 80])
1 , Finished
==========
Image Shape1: torch.Size([1, 3, 80, 80])
Loss: -8.730318
Image Shape2: (96, 96, 3)
Image Shape3: torch.Size([1, 3, 96, 96])
2 , Finished
==========
Image Shape1: torch.Size([1, 3, 96, 96])
Loss: -9.697872
Image Shape2: (115, 115, 3)
Image Shape3: torch.Size([1, 3, 115, 115])
3 , Finished
==========
Image Shape1: torch.Size([1, 3, 115, 115])
Loss: -10.190881
Image Shape2: (138, 138, 3)
Image Shape3: torch.Size([1, 3, 138, 138])
4 , Finished
==========
Image Shape1: torch.Size([1, 3, 138, 138])
Loss: -10.315895
Image Shape2: (165, 165, 3)
Image Shape3: torch.Size([1, 3, 165, 165])
5 , Finished
==========
Image Shape1: torch.Size([1, 3, 165, 165])
Loss: -9.73861
Image Shape2: (198, 198, 3)
Image Shape3: torch.Size([1, 3, 198, 198])
6 , Finished
==========
Image Shape1: torch.Size([1, 3, 198, 198])
Loss: -9.503629
Image Shape2: (237, 237, 3)
Image Shape3: torch.Size([1, 3, 237, 237])
7 , Finished
==========
Image Shape1: torch.Size([1, 3, 237, 237])
Loss: -9.488493
Image Shape2: (284, 284, 3)
Image Shape3: torch.Size([1, 3, 284, 284])
8 , Finished
==========
Image Shape1: torch.Size([1, 3, 284, 284])
Loss: -9.100454
Image Shape2: (340, 340, 3)
Image Shape3: torch.Size([1, 3, 340, 340])
9 , Finished
==========
Image Shape1: torch.Size([1, 3, 340, 340])
Loss: -8.699549
Image Shape2: (408, 408, 3)
Image Shape3: torch.Size([1, 3, 408, 408])
10 , Finished
==========
Image Shape1: torch.Size([1, 3, 408, 408])
Loss: -8.90135
Image Shape2: (489, 489, 3)
Image Shape3: torch.Size([1, 3, 489, 489])
11 , Finished
==========
Image Shape1: torch.Size([1, 3, 489, 489])
Loss: -8.838546
Image Shape2: (586, 586, 3)
Image Shape3: torch.Size([1, 3, 586, 586])
12 , Finished
==========

Process finished with exit code 0
"""

The generated feature visualization:

[figure: image generated to maximize the activation of filter 265 in layer 42]

Let's grab an image from the web and test whether this filter's response to it is indeed the highest.

Test image:

[figure: test image (./bird.jpg)]

import torch
from torch.autograd import Variable
from PIL import Image, ImageOps
import torchvision.transforms as transforms
import torchvision.models as models
import numpy as np
import cv2
from cv2 import resize
from matplotlib import pyplot as plt

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class SaveFeatures():
    def __init__(self, module):
        self.hook = module.register_forward_hook(self.hook_fn)
    def hook_fn(self, module, input, output):
        self.features = output.clone()
    def close(self):
        self.hook.remove()

size = (224, 224)
picture = Image.open("./bird.jpg").convert("RGB")
picture = ImageOps.fit(picture, size, Image.ANTIALIAS)  # on Pillow >= 10 use Image.LANCZOS instead

loader = transforms.ToTensor()
picture = loader(picture).to(device)
print(picture.shape)

cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).view(-1, 1, 1).to(device)
cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).view(-1, 1, 1).to(device)
picture = (picture - cnn_normalization_mean) / cnn_normalization_std

model_vgg16 = models.vgg16_bn(pretrained=True).features.to(device).eval()
print(list(model_vgg16.children())[40])  # Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
print(list(model_vgg16.children())[41])  # BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
print(list(model_vgg16.children())[42])  # ReLU(inplace=True)

layer = 42
filters = 265
activations = SaveFeatures(list(model_vgg16.children())[layer])

with torch.no_grad():
    picture_var = Variable(picture[None])
    model_vgg16(picture_var)
activations.close()

print(activations.features.shape)  # torch.Size([1, 512, 14, 14])

# plot the mean activation of each filter
mean_act = [activations.features[0, i].mean().item() for i in range(activations.features.shape[1])]
plt.figure(figsize=(7, 5))
act = plt.plot(mean_act, linewidth=2.)
extraticks = [filters]
ax = act[0].axes
ax.set_xlim(0, 500)
plt.axvline(x=filters, color="gray", linestyle="--")
ax.set_xlabel("feature map")
ax.set_ylabel("mean activation")
ax.set_xticks([0, 200, 400] + extraticks)
plt.show()

"""
torch.Size([3, 224, 224])
Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
ReLU(inplace=True)
torch.Size([1, 512, 14, 14])
"""

[figure: mean activation of each of the 512 feature maps in layer 42 for the test image, with filter 265 marked by the dashed line]

As shown, feature map 265 has the highest response to this input.
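This can also be checked numerically instead of by eye; a small snippet reusing the `mean_act` list from the script above:

# Rank the 512 feature maps by mean activation; filter 265 should be at (or near) the top
top5 = np.argsort(mean_act)[::-1][:5]
print("top-5 feature maps:", top5.tolist())
print("265 is the strongest:", int(np.argmax(mean_act)) == filters)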

Summary: testing other layers and filters, the response of the chosen filter in the resulting plot is not always the highest, though it is still quite high; most likely the test image found is not the closest match for the feature of that particular filter in the chosen layer.
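To repeat this test for other layers and filters without copy-pasting, the measurement part can be wrapped into a helper; a hypothetical `mean_activations` sketch (not from the original post), reusing the `SaveFeatures` class defined above:

# Hypothetical helper: mean activation of every filter in layer `layer_idx`
# for a preprocessed input tensor `picture` of shape (3, H, W)
def mean_activations(model, layer_idx, picture):
    saver = SaveFeatures(list(model.children())[layer_idx])
    with torch.no_grad():
        model(picture[None])
    saver.close()
    return [saver.features[0, i].mean().item() for i in range(saver.features.shape[1])]

# e.g. compare a couple of candidate layers on the same test image
# for l in (32, 42):
#     acts = mean_activations(model_vgg16, l, picture)
#     print(l, int(np.argmax(acts)), max(acts))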
