Category: 《深入浅出PaddlePaddle函数》总目录 (series index)

Related articles:
· 深入浅出PaddlePaddle函数——paddle.Tensor
· 深入浅出PaddlePaddle函数——paddle.to_tensor

Creates a Tensor of type paddle.Tensor from known data. data can be a scalar, tuple, list, numpy.ndarray, or paddle.Tensor. If data is already a Tensor and neither dtype nor place changes, no copy is made and the original Tensor is returned; otherwise a new Tensor is created, and the original computation graph is not retained.
Syntax
paddle.to_tensor(data, dtype=None, place=None, stop_gradient=True)

Parameters
· data [scalar/tuple/list/ndarray/Tensor]: The data used to initialize the Tensor; can be a scalar, tuple, list, numpy.ndarray, or paddle.Tensor.
· dtype [str, optional]: The data type of the created Tensor; can be bool, float16, float32, float64, int8, int16, int32, int64, uint8, complex64, or complex128. Default: None. If data is a Python float, the type is taken from get_default_dtype; for other types the dtype is inferred automatically.
· place [CPUPlace/CUDAPinnedPlace/CUDAPlace, optional]: The device on which to create the Tensor; can be CPUPlace, CUDAPinnedPlace, or CUDAPlace. Default: None, which uses the global place.
· stop_gradient [bool, optional]: Whether to block Autograd gradient propagation. Default: True, meaning gradients are not propagated.
Returns
A Tensor created from data.
Examples
import paddle

type(paddle.to_tensor(1))
# <class 'paddle.Tensor'>

paddle.to_tensor(1)
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
#        [1])

x = paddle.to_tensor(1, stop_gradient=False)
print(x)
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=False,
#        [1])

paddle.to_tensor(x)  # A new tensor will be created with default stop_gradient=True
# Tensor(shape=[1], dtype=int64, place=CPUPlace, stop_gradient=True,
#        [1])

paddle.to_tensor([[0.1, 0.2], [0.3, 0.4]], place=paddle.CPUPlace(), stop_gradient=False)
# Tensor(shape=[2, 2], dtype=float32, place=CPUPlace, stop_gradient=False,
#        [[0.10000000, 0.20000000],
#         [0.30000001, 0.40000001]])

type(paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64'))
# <class 'paddle.Tensor'>

paddle.to_tensor([[1+1j, 2], [3+2j, 4]], dtype='complex64')
# Tensor(shape=[2, 2], dtype=complex64, place=CPUPlace, stop_gradient=True,
#        [[(1+1j), (2+0j)],
#         [(3+2j), (4+0j)]])

Implementation
def to_tensor(data, dtype=None, place=None, stop_gradient=True):
    r"""
    Constructs a ``paddle.Tensor`` from ``data``, which can be scalar, tuple, list,
    numpy.ndarray, paddle.Tensor.

    If the ``data`` is already a Tensor, copy will be performed and return a new tensor.
    If you only want to change stop_gradient property, please call
    ``Tensor.stop_gradient = stop_gradient`` directly.

    Args:
        data (scalar|tuple|list|ndarray|Tensor): Initial data for the tensor.
            Can be a scalar, list, tuple, numpy.ndarray, paddle.Tensor.
        dtype (str|np.dtype, optional): The desired data type of returned tensor. Can be
            'bool', 'float16', 'float32', 'float64', 'int8', 'int16', 'int32', 'int64',
            'uint8', 'complex64', 'complex128'. Default: None, infers dtype from ``data``
            except for python float number which gets dtype from ``get_default_type``.
        place (CPUPlace|CUDAPinnedPlace|CUDAPlace|str, optional): The place to allocate
            Tensor. Can be CPUPlace, CUDAPinnedPlace, CUDAPlace. Default: None, means
            global place. If ``place`` is string, it can be ``cpu``, ``gpu:x`` and
            ``gpu_pinned``, where ``x`` is the index of the GPUs.
        stop_gradient (bool, optional): Whether to block the gradient propagation of
            Autograd. Default: True.

    Returns:
        Tensor: A Tensor constructed from ``data``.
    """
    place = _get_paddle_place(place)
    if place is None:
        place = _current_expected_place()

    if _non_static_mode():
        return _to_tensor_non_static(data, dtype, place, stop_gradient)
    # call assign for static graph
    else:
        re_exp = re.compile(r'[(](.+?)[)]', re.S)
        place_str = re.findall(re_exp, str(place))[0]
        with paddle.static.device_guard(place_str):
            return _to_tensor_static(data, dtype, stop_gradient)


def full_like(x, fill_value, dtype=None, name=None):
    """
    This function creates a tensor filled with ``fill_value`` which has identical shape
    of ``x`` and ``dtype``. If the ``dtype`` is None, the data type of Tensor is same
    with ``x``.

    Args:
        x (Tensor): The input tensor which specifies shape and data type. The data type
            can be bool, float16, float32, float64, int32, int64.
        fill_value (bool|float|int): The value to fill the tensor with. Note: this value
            shouldn't exceed the range of the output data type.
        dtype (np.dtype|str, optional): The data type of output. The data type can be one
            of bool, float16, float32, float64, int32, int64. The default value is None,
            which means the output data type is the same as input.
        name (str, optional): For details, please refer to :ref:`api_guide_Name`.
            Generally, no setting is required. Default: None.

    Returns:
        Tensor: Tensor which is created according to ``x``, ``fill_value`` and ``dtype``.

    Examples:
        .. code-block:: python

            import paddle

            input = paddle.full(shape=[2, 3], fill_value=0.0, dtype='float32', name='input')
            output = paddle.full_like(input, 2.0)
            # [[2. 2. 2.]
            #  [2. 2. 2.]]
    """
    if dtype is None:
        dtype = x.dtype
    else:
        if not isinstance(dtype, core.VarDesc.VarType):
            dtype = convert_np_dtype_to_dtype_(dtype)

    if in_dygraph_mode():
        return _C_ops.full_like(x, fill_value, dtype, x.place)

    if _in_legacy_dygraph():
        return _legacy_C_ops.fill_any_like(
            x, 'value', fill_value, 'dtype', dtype
        )

    helper = LayerHelper("full_like", **locals())
    check_variable_and_dtype(
        x,
        'x',
        ['bool', 'float16', 'float32', 'float64', 'int16', 'int32', 'int64'],
        'full_like',
    )
    check_dtype(
        dtype,
        'dtype',
        ['bool', 'float16', 'float32', 'float64', 'int16', 'int32', 'int64'],
        'full_like/zeros_like/ones_like',
    )
    out = helper.create_variable_for_type_inference(dtype=dtype)
    helper.append_op(
        type='fill_any_like',
        inputs={'X': [x]},
        attrs={'value': fill_value, 'dtype': dtype},
        outputs={'Out': [out]},
    )
    out.stop_gradient = True
    return out