Category: "A Deep Dive into PaddlePaddle Functions" (series index, 《深入浅出PaddlePaddle函数》)

Related articles:
· A Deep Dive into PaddlePaddle Functions: paddle.Tensor
· A Deep Dive into PaddlePaddle Functions: paddle.to_tensor

paddle.to_tensor creates a Tensor of type paddle.Tensor from known data. The data can be a scalar, tuple, list, numpy.ndarray, or paddle.Tensor. If data is already a Tensor and neither dtype nor place changes, no copy is made and the original Tensor is returned; otherwise a new Tensor is created, and it does not keep the original computation graph.

Syntax

paddle.to_tensor(data, dtype=None, place=None, stop_gradient=True)

Parameters

· data [scalar/tuple/list/ndarray/Tensor]: the data used to initialize the Tensor; may be a scalar, tuple, list, numpy.ndarray, or paddle.Tensor.
· dtype [optional, str]: the data type of the created Tensor; one of bool, float16, float32, float64, int8, int16, int32, int64, uint8, complex64, complex128. Defaults to None: if data is a Python float, the type is taken from get_default_dtype; for other types the dtype is inferred automatically.
· place [optional, CPUPlace/CUDAPinnedPlace/CUDAPlace]: the device on which to create the Tensor; may be CPUPlace, CUDAPinnedPlace, or CUDAPlace. Defaults to None, which uses the global place.
· stop_gradient [optional, bool]: whether to block gradient propagation in Autograd. Defaults to True, in which case gradients are not propagated.

Returns

The Tensor created from data.

Examples

import paddle

type(paddle.to_tensor(1))
# <class 'paddle.Tensor'>

paddle.to_tensor(1)
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
#        [1])

x = paddle.to_tensor(1, stop_gradient=False)
print(x)
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=False,
#        [1])

paddle.to_tensor(x)  # A new tensor will be created with default stop_gradient=True
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
#        [1])

paddle.to_tensor([[0.1, 0.2], [0.3, 0.4]], place=paddle.CPUPlace(), stop_gradient=False)
# Tensor(shape=[2, 2], dtype=float32, place=CPUPlace, stop_gradient=False,
#        [[0.10000000, 0.20000000],
#         [0.30000001, 0.40000001]])

type(paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64'))
# <class 'paddle.Tensor'>

paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64')
# Tensor(shape=[2, 2], dtype=complex64, place=CPUPlace, stop_gradient=True,
#        [[(1+1j), (2+0j)],
#         [(3+2j), (4+0j)]])
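The parameter list above only states the dtype inference rule briefly, so here is a small supplementary sketch (not part of the original examples; it assumes the global default dtype is still PaddlePaddle's out-of-the-box 'float32'): a Python float follows get_default_dtype, a Python int is inferred as int64, and an explicit dtype argument overrides the inference.

import paddle

# dtype inference sketch; assumes the default dtype has not been changed from 'float32'
print(paddle.get_default_dtype())                  # expected: float32
print(paddle.to_tensor(3.14).dtype)                # Python float -> default dtype (paddle.float32)
print(paddle.to_tensor(3).dtype)                   # Python int   -> paddle.int64
print(paddle.to_tensor(3, dtype='float64').dtype)  # explicit dtype overrides inference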
Implementation

def to_tensor(data, dtype=None, place=None, stop_gradient=True):
    r"""
    Constructs a ``paddle.Tensor`` from ``data``, which can be scalar, tuple, list, numpy.ndarray, paddle.Tensor.

    If the ``data`` is already a Tensor, copy will be performed and return a new tensor.
    If you only want to change stop_gradient property, please call ``Tensor.stop_gradient = stop_gradient`` directly.

    Args:
        data(scalar|tuple|list|ndarray|Tensor): Initial data for the tensor.
            Can be a scalar, list, tuple, numpy.ndarray, paddle.Tensor.
        dtype(str|np.dtype, optional): The desired data type of returned tensor. Can be 'bool', 'float16',
            'float32', 'float64', 'int8', 'int16', 'int32', 'int64', 'uint8',
            'complex64', 'complex128'. Default: None, infers dtype from ``data``
            except for python float number which gets dtype from ``get_default_type``.
        place(CPUPlace|CUDAPinnedPlace|CUDAPlace|str, optional): The place to allocate Tensor. Can be
            CPUPlace, CUDAPinnedPlace, CUDAPlace. Default: None, means global place. If ``place`` is
            string, it can be ``cpu``, ``gpu:x`` and ``gpu_pinned``, where ``x`` is the index of the GPUs.
        stop_gradient(bool, optional): Whether to block the gradient propagation of Autograd. Default: True.

    Returns:
        Tensor: A Tensor constructed from ``data``.

    Examples:
        .. code-block:: python

            import paddle

            type(paddle.to_tensor(1))
            # <class 'paddle.Tensor'>

            paddle.to_tensor(1)
            # Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
            #        [1])

            x = paddle.to_tensor(1, stop_gradient=False)
            print(x)
            # Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=False,
            #        [1])

            paddle.to_tensor(x)  # A new tensor will be created with default stop_gradient=True
            # Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
            #        [1])

            paddle.to_tensor([[0.1, 0.2], [0.3, 0.4]], place=paddle.CPUPlace(), stop_gradient=False)
            # Tensor(shape=[2, 2], dtype=float32, place=CPUPlace, stop_gradient=False,
            #        [[0.10000000, 0.20000000],
            #         [0.30000001, 0.40000001]])

            type(paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64'))
            # <class 'paddle.Tensor'>

            paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64')
            # Tensor(shape=[2, 2], dtype=complex64, place=CPUPlace, stop_gradient=True,
            #        [[(1+1j), (2+0j)],
            #         [(3+2j), (4+0j)]])
    """
    place = _get_paddle_place(place)
    if place is None:
        place = _current_expected_place()
    if _non_static_mode():
        return _to_tensor_non_static(data, dtype, place, stop_gradient)

    # call assign for static graph
    else:
        re_exp = re.compile(r'[(](.+?)[)]', re.S)
        place_str = re.findall(re_exp, str(place))[0]

        with paddle.static.device_guard(place_str):
            return _to_tensor_static(data, dtype, stop_gradient)


def full_like(x, fill_value, dtype=None, name=None):
    """
    This function creates a tensor filled with ``fill_value`` which has identical shape of ``x`` and ``dtype``.
    If the ``dtype`` is None, the data type of Tensor is same with ``x``.

    Args:
        x(Tensor): The input tensor which specifies shape and data type. The data type can be bool, float16, float32, float64, int32, int64.
        fill_value(bool|float|int): The value to fill the tensor with. Note: this value shouldn't exceed the range of the output data type.
        dtype(np.dtype|str, optional): The data type of output. The data type can be one
            of bool, float16, float32, float64, int32, int64. The default value is None, which means the output
            data type is the same as input.
        name(str, optional): For details, please refer to :ref:`api_guide_Name`. Generally, no setting is required. Default: None.

    Returns:
        Tensor: Tensor which is created according to ``x``, ``fill_value`` and ``dtype``.

    Examples:
        .. code-block:: python

            import paddle

            input = paddle.full(shape=[2, 3], fill_value=0.0, dtype='float32', name='input')
            output = paddle.full_like(input, 2.0)
            # [[2. 2. 2.]
            #  [2. 2. 2.]]
    """
    if dtype is None:
        dtype = x.dtype
    else:
        if not isinstance(dtype, core.VarDesc.VarType):
            dtype = convert_np_dtype_to_dtype_(dtype)

    if in_dygraph_mode():
        return _C_ops.full_like(x, fill_value, dtype, x.place)

    if _in_legacy_dygraph():
        return _legacy_C_ops.fill_any_like(
            x, 'value', fill_value, 'dtype', dtype
        )

    helper = LayerHelper("full_like", **locals())
    check_variable_and_dtype(
        x,
        'x',
        ['bool', 'float16', 'float32', 'float64', 'int16', 'int32', 'int64'],
        'full_like',
    )
    check_dtype(
        dtype,
        'dtype',
        ['bool', 'float16', 'float32', 'float64', 'int16', 'int32', 'int64'],
        'full_like/zeros_like/ones_like',
    )
    out = helper.create_variable_for_type_inference(dtype=dtype)

    helper.append_op(
        type='fill_any_like',
        inputs={'X': [x]},
        attrs={'value': fill_value, 'dtype': dtype},
        outputs={'Out': [out]},
    )
    out.stop_gradient = True
    return out
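A brief usage note to complement the implementation above, added here rather than taken from the article: the docstring also accepts place as a string ('cpu', 'gpu:x', 'gpu_pinned'), and stop_gradient=False lets Autograd propagate gradients back to the created tensor. The sketch below assumes dynamic-graph mode (the _non_static_mode() branch of to_tensor) on a CPU-only install; the exact printed formatting can differ between PaddlePaddle versions.

import paddle

# place may be passed as a string, per the docstring above;
# stop_gradient=False allows gradients to flow back to this tensor
x = paddle.to_tensor([1.0, 2.0, 3.0], place='cpu', stop_gradient=False)

y = (x * x).sum()
y.backward()
print(x.grad)             # expected: 2*x, i.e. [2., 4., 6.]

# paddle.full_like (whose implementation is listed above) reuses x's shape and dtype
z = paddle.full_like(x, 7.0)
print(z)                  # expected: a float32 tensor [7., 7., 7.] on the CPU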