龙岩做网站改版一般多久,怎么做好营销型网站,wordpress的文章在哪里,建立公司官网多少钱【MetaLearning】有关Pytorch的元学习库higher的基本用法 文章目录 【MetaLearning】有关Pytorch的元学习库higher的基本用法1. 基本介绍2. Toy ExampleReference 1. 基本介绍 higher.innerloop_ctx是higher库的上下文管理器#xff0c;用于创建内部循环#xff08;inner lo…【MetaLearning】有关Pytorch的元学习库higher的基本用法 文章目录 【MetaLearning】有关Pytorch的元学习库higher的基本用法1. 基本介绍2. Toy ExampleReference 1. 基本介绍 higher.innerloop_ctx是higher库的上下文管理器用于创建内部循环inner loop的上下文内部循环通常用于元学习场景其中在模型参数更新的内部循环中进行一些额外的操作。 这个上下文管理器主要有五个参数详细请参考官方库说明 higher.innerloop_ctx(model, opt, deviceNone, copy_initial_weightsTrue, overrideNone, track_higher_gradsTrue)第一个参数model是需要进行内部循环的模型通常是你的元模型第二个参数opt是优化器这是你用来更新模型参数的优化器第三个参数copy_initial_weights是一个布尔值用于指定是否在每个内部循环之前复制初始权重如果设置为True则表示在每个内部循环之前都会将模型的初始权重进行复制以确保每个内部循环都从相同的初始权重开始。如果设置为False则所有的内部循环共享相同的权重模型。第四个参数override是一个字典例如override{lr:lr_tensor, momentum: momentum_tensor}用于指定在内部循环期间覆盖优化器的参数比如在这里示例中lr_tensor和momentum_tensor是张量用于指定内部循环期间覆盖的学习率和动量。第五个参数track_higher_grads是一个布尔值用于跟踪更高阶的梯度如果是True则内部循环中计算的梯度将被跟踪以支持高阶的梯度计算如果设置为False则不会跟踪高阶梯度。 在with语句块中通过(fmodel, diffopt)获取内部循环的上下文。fmodel表示内部循环中的模型diffopt表示内部循环中的优化器在这个上下文中你可以执行内部循环的计算和参数更新。 下面给出一个基本的使用示例演示如何使用higher.innerloop_ctx使用higher库需要习惯下列的转变 从通常使用pytorch的用法 model MyModel() opt torch.optim.Adam(model.parameters())for xs, ys in data:opt.zero_grad()logits model(xs)loss loss_function(logits, ys)loss.backward()opt.step()转变到 model MyModel() opt torch.optim.Adam(model.parameters())with higher.innerloop_ctx(model, opt) as (fmodel, diffopt):for xs, ys in data:logits fmodel(xs) # modified params can also be passed as a kwargloss loss_function(logits, ys) # no need to call loss.backwards()diffopt.step(loss) # note that step must take loss as an argument!,这一步相当于使用了loss.backward()和opt.step()# At the end of your inner loop you can obtain these e.g. ...grad_of_grads torch.autograd.grad(meta_loss_fn(fmodel.parameters()), fmodel.parameters(time0))训练模型和执行diffopt.step 来更新fmodel之间的区别在于fmodel不会像原始部分中的opt.step()那样就地更新参数。 相反每次调用 diffopt.step时都会以这样的方式创建新版本的参数即fmodel将在下一步中使用新的参数但所有以前的参数仍会保留。 运行的原理是什么呢举个例子fmodel从fmodel.parameters(time0)开始迭代这里的time0表示就是第0次迭代当我们调用diffopt.stepN次之后我们可以使用fmodel.parameters(timei)来访问其中i可以取到1到N并且我们仍然可以访问fmodel.parameters(time0)这个结果和迭代之前是一样的这是为什么呢 因为fmodel的创建依赖于参数copy_initial_weights如果copy_initial_weightsTrue那么fmodel.parameters(time0)是从原模型clone’d别且是detach’ed即是从原模型克隆过来并且进行分离计算图了如果copy_initial_weightsFalse那么只是进行了clone’d并没有detach‘ed。 放一段原文在这里方便大家理解 I.e. fmodel starts with only fmodel.parameters(time0) available, but after you called diffopt.step N times you can ask fmodel to give you fmodel.parameters(timei) for any i up to N inclusive. Notice that fmodel.parameters(time0) doesn’t change in this process at all, just every time fmodel is applied to some input it will use the latest version of parameters it currently has. Now, what exactly is fmodel.parameters(time0)? It is created here and depends on copy_initial_weights. If copy_initial_weightsTrue then fmodel.parameters(time0) are clone’d and detach’ed parameters of model. Otherwise they are only clone’d, but not detach’ed! That means that when we do meta-optimization step, the original model’s parameters will actually accumulate gradients if and only if copy_initial_weightsFalse. And in MAML we want to optimize model’s starting weights so we actually do need to get gradients from meta-optimization step. 2. 
## 2. Toy Example

```python
import torch
import torch.nn as nn
import torch.optim as optim
import higher
import numpy as np

np.random.seed(1)
torch.manual_seed(3)
N = 100
actual_multiplier = 3.5
meta_lr = 0.00001
loops = 5  # how many iterations in the inner loop we want to do

x = torch.tensor(np.random.random((N, 1)), dtype=torch.float64)  # features for inner training loop
y = x * actual_multiplier  # target for inner training loop
model = nn.Linear(1, 1, bias=False).double()  # simplest possible model - multiply input x by weight w without bias
meta_opt = optim.SGD(model.parameters(), lr=meta_lr, momentum=0.)


def run_inner_loop_once(model, verbose, copy_initial_weights):
    lr_tensor = torch.tensor([0.3], requires_grad=True)
    momentum_tensor = torch.tensor([0.5], requires_grad=True)
    opt = optim.SGD(model.parameters(), lr=0.3, momentum=0.5)
    with higher.innerloop_ctx(model, opt, copy_initial_weights=copy_initial_weights,
                              override={'lr': lr_tensor, 'momentum': momentum_tensor}) as (fmodel, diffopt):
        for j in range(loops):
            if verbose:
                print('Starting inner loop step j={0}'.format(j))
                print('    Representation of fmodel.parameters(time={0}): {1}'.format(j, str(list(fmodel.parameters(time=j)))))
                print('    Notice that fmodel.parameters() is same as fmodel.parameters(time={0}): {1}'.format(j, (list(fmodel.parameters())[0] is list(fmodel.parameters(time=j))[0])))
            out = fmodel(x)
            if verbose:
                print('    Notice how out is x multiplied by the latest version of weight: {0:.4} * {1:.4} == {2:.4}'.format(x[0, 0].item(), list(fmodel.parameters())[0].item(), out[0].item()))
            loss = ((out - y) ** 2).mean()
            diffopt.step(loss)

        if verbose:
            # after all inner training lets see all steps parameter tensors
            print()
            print('Lets print all intermediate parameters versions after inner loop is done:')
            for j in range(loops + 1):
                print('    For j={0} parameter is: {1}'.format(j, str(list(fmodel.parameters(time=j)))))
            print()

        # lets imagine now that our meta-learning optimization is trying to check how far we got in the end from the actual_multiplier
        weight_learned_after_full_inner_loop = list(fmodel.parameters())[0]
        meta_loss = (weight_learned_after_full_inner_loop - actual_multiplier) ** 2
        print('  Final meta-loss: {0}'.format(meta_loss.item()))
        meta_loss.backward()  # will only propagate gradient to original model parameters grad if copy_initial_weights=False
        if verbose:
            print('  Gradient of final loss we got for lr and momentum: {0} and {1}'.format(lr_tensor.grad, momentum_tensor.grad))
            print('  If you change number of iterations loops to much larger number final loss will be stable and the values above will be smaller')
        return meta_loss.item()


print('=================== Run Inner Loop First Time (copy_initial_weights=True) =================\n')
meta_loss_val1 = run_inner_loop_once(model, verbose=True, copy_initial_weights=True)
print('\nLets see if we got any gradient for initial model parameters: {0}\n'.format(list(model.parameters())[0].grad))

print('=================== Run Inner Loop Second Time (copy_initial_weights=False) =================\n')
meta_loss_val2 = run_inner_loop_once(model, verbose=False, copy_initial_weights=False)
print('\nLets see if we got any gradient for initial model parameters: {0}\n'.format(list(model.parameters())[0].grad))

print('=================== Run Inner Loop Third Time (copy_initial_weights=False) =================\n')
final_meta_gradient = list(model.parameters())[0].grad.item()
# Now lets double-check higher library is actually doing what it promised to do, not just giving us
# a bunch of hand-wavy statements and difficult to read code.
# We will do a simple SGD step using meta_opt changing initial weight for the training and see how meta loss changed
meta_opt.step()
meta_opt.zero_grad()
meta_step = - meta_lr * final_meta_gradient  # how much meta_opt actually shifted initial weight value
# before we run the inner loop a third time, we update the meta parameter first
meta_loss_val3 = run_inner_loop_once(model, verbose=False, copy_initial_weights=False)

meta_loss_gradient_approximation = (meta_loss_val3 - meta_loss_val2) / meta_step

print()
print('Side-by-side meta_loss_gradient_approximation and gradient computed by higher lib: {0:.4} VS {1:.4}'.format(meta_loss_gradient_approximation, final_meta_gradient))
```

The output is as follows:

```
=================== Run Inner Loop First Time (copy_initial_weights=True) =================

Starting inner loop step j=0
    Representation of fmodel.parameters(time=0): [tensor([[-0.9915]], dtype=torch.float64, requires_grad=True)]
    Notice that fmodel.parameters() is same as fmodel.parameters(time=0): True
    Notice how out is x multiplied by the latest version of weight: 0.417 * -0.9915 == -0.4135
Starting inner loop step j=1
    Representation of fmodel.parameters(time=1): [tensor([[-0.1217]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    Notice that fmodel.parameters() is same as fmodel.parameters(time=1): True
    Notice how out is x multiplied by the latest version of weight: 0.417 * -0.1217 == -0.05075
Starting inner loop step j=2
    Representation of fmodel.parameters(time=2): [tensor([[1.0145]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    Notice that fmodel.parameters() is same as fmodel.parameters(time=2): True
    Notice how out is x multiplied by the latest version of weight: 0.417 * 1.015 == 0.4231
Starting inner loop step j=3
    Representation of fmodel.parameters(time=3): [tensor([[2.0640]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    Notice that fmodel.parameters() is same as fmodel.parameters(time=3): True
    Notice how out is x multiplied by the latest version of weight: 0.417 * 2.064 == 0.8607
Starting inner loop step j=4
    Representation of fmodel.parameters(time=4): [tensor([[2.8668]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    Notice that fmodel.parameters() is same as fmodel.parameters(time=4): True
    Notice how out is x multiplied by the latest version of weight: 0.417 * 2.867 == 1.196

Lets print all intermediate parameters versions after inner loop is done:
    For j=0 parameter is: [tensor([[-0.9915]], dtype=torch.float64, requires_grad=True)]
    For j=1 parameter is: [tensor([[-0.1217]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    For j=2 parameter is: [tensor([[1.0145]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    For j=3 parameter is: [tensor([[2.0640]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    For j=4 parameter is: [tensor([[2.8668]], dtype=torch.float64, grad_fn=<AddBackward0>)]
    For j=5 parameter is: [tensor([[3.3908]], dtype=torch.float64, grad_fn=<AddBackward0>)]

  Final meta-loss: 0.011927987982895929
  Gradient of final loss we got for lr and momentum: tensor([-1.6295]) and tensor([-0.9496])
  If you change number of iterations loops to much larger number final loss will be stable and the values above will be smaller

Lets see if we got any gradient for initial model parameters: None

=================== Run Inner Loop Second Time (copy_initial_weights=False) =================

  Final meta-loss: 0.011927987982895929

Lets see if we got any gradient for initial model parameters: tensor([[-0.0053]], dtype=torch.float64)

=================== Run Inner Loop Third Time (copy_initial_weights=False) =================

  Final meta-loss: 0.01192798770078706

Side-by-side meta_loss_gradient_approximation and gradient computed by higher lib: -0.005311 VS -0.005311
```

## Reference

- Paper: Generalized Inner Loop Meta-Learning
- What does the copy_initial_weights documentation mean in the higher library for Pytorch?