
In-Depth Understanding of the Core of PyTorch: Mastering Tensors

王林
Release: 2024-01-09 20:50:24

Today I'm putting together some notes on tensors in PyTorch, and I hope you find them useful.

The material below is practical and comes with plenty of examples.

Let's start with a brief introduction. In PyTorch, the tensor is the core data structure: a multi-dimensional array, similar to a NumPy array. Tensors are not just containers for data; they are also the foundation for mathematical operations and deep learning computations.

The summary below covers three aspects:

  • The concept of tensor
  • The principle of tensor
  • The operation of tensor


The concept of tensor

1. The definition of tensor

A tensor is a multi-dimensional array. It can be a scalar (a zero-dimensional array), a vector (a one-dimensional array), a matrix (a two-dimensional array), or an array of even higher dimension.

In PyTorch, a tensor is an instance of torch.Tensor and can be created in different ways, for example directly from a Python list, from a NumPy array, or with a factory function such as torch.rand.

import torch

# Create a scalar (zero-dimensional tensor)
scalar_tensor = torch.tensor(3.14)

# Create a vector (one-dimensional tensor)
vector_tensor = torch.tensor([1, 2, 3])

# Create a matrix (two-dimensional tensor)
matrix_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])

# Create a 3-D tensor of shape (2, 3, 4)
tensor_3d = torch.rand((2, 3, 4))
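Since creating a tensor from a NumPy array was mentioned but not shown above, here is a minimal sketch of that route (the variable names are just for illustration). One point worth remembering: torch.from_numpy shares memory with the source array, while torch.tensor copies the data.

import numpy as np
import torch

np_array = np.array([[1.0, 2.0], [3.0, 4.0]])

# from_numpy shares memory with the NumPy array (no copy is made)
tensor_from_numpy = torch.from_numpy(np_array)

# torch.tensor always copies the data into a new tensor
tensor_copy = torch.tensor(np_array)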

2. Attributes of tensors

Each tensor has several important attributes, including its shape, data type (dtype), and device.

# Get the shape of the tensor
shape = tensor_3d.shape

# Get the data type of the tensor
dtype = tensor_3d.dtype

# Get the device the tensor lives on
device = tensor_3d.device
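Beyond reading these attributes, you will often want to change them. As a small illustration (my addition, not part of the original snippet), both the dtype and the device can be changed with .to():

# Convert the tensor to a different data type
double_tensor = tensor_3d.to(torch.float64)

# Move the tensor to the GPU if one is available, otherwise keep it on the CPU
target_device = "cuda" if torch.cuda.is_available() else "cpu"
moved_tensor = tensor_3d.to(target_device)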

3. The shape of a tensor

The shape of a tensor defines its number of dimensions and the size along each dimension. For example, a tensor of shape (2, 3, 4) has three dimensions of sizes 2, 3, and 4. Shape is essential for understanding and manipulating tensors.

# Get the shape of the tensor
shape = tensor_3d.shape

# Change the shape of the tensor from (2, 3, 4) to (3, 8)
reshaped_tensor = tensor_3d.view(3, 8)
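One detail worth noting (my addition, not in the original text): view requires the new shape to contain the same total number of elements, and a -1 lets PyTorch infer one dimension for you.

# The new shape must keep the total element count (2 * 3 * 4 = 24)
flattened = tensor_3d.view(-1)    # shape (24,); -1 lets PyTorch infer the size
pairs = tensor_3d.view(2, -1)     # shape (2, 12)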

The principle of tensors

Tensors in PyTorch are implemented by the torch.Tensor class, which provides an abstraction over the underlying storage.

Tensors contain three main components:

  • storage
  • shape
  • stride

1. Storage

Storage is where the data actually lives: a contiguous block of memory. Multiple tensors can share the same storage, which reduces memory consumption. A tensor's shape describes how the flat data in storage is interpreted.

# Get the underlying storage of the tensor
storage = tensor_3d.storage()
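To make the sharing point concrete, here is a small sketch of my own: slicing returns a view onto the same storage, so modifying the view also modifies the original tensor.

base = torch.zeros(2, 3)
row_view = base[0]        # slicing returns a view that shares storage with `base`

row_view[0] = 42.0
print(base[0, 0])         # tensor(42.) -- the change shows up in `base` too

# Both tensors start at the same memory address
print(base.data_ptr() == row_view.data_ptr())  # True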

2. Shape

The shape of a tensor defines its dimensions and the size in each dimension. Shape information helps explain how data in storage is organized.

# Get the shape of the tensor
shape = tensor_3d.shape

3. Stride

The stride of a dimension is the number of elements you have to skip in storage to move to the next element along that dimension. Understanding strides helps explain the behavior and performance of indexing and slicing on tensors.

# Get the strides of the tensor
stride = tensor_3d.stride()
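As a concrete illustration (my addition): a contiguous tensor of shape (2, 3, 4) has strides (12, 4, 1), and transposing it only reorders the strides without copying any data.

print(tensor_3d.stride())                  # (12, 4, 1) for a contiguous (2, 3, 4) tensor
print(tensor_3d.transpose(0, 2).stride())  # (1, 4, 12): same storage, reordered strides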

Tensor operations

PyTorch provides a wealth of tensor operations, including mathematical operations, logical operations, indexing, slicing, etc.

Here are several of the most common operations:

1. Mathematical operations

# Addition
result_add = tensor_3d + 2

# Multiplication
result_mul = tensor_3d * 3

# Matrix multiplication
matrix_a = torch.rand((2, 3))
matrix_b = torch.rand((3, 4))
result_matmul = torch.mm(matrix_a, matrix_b)
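A small note of my own on the matrix product above: torch.mm works on 2-D matrices, and the @ operator (torch.matmul) gives the same result here while also supporting batched, higher-dimensional inputs.

print(result_matmul.shape)          # torch.Size([2, 4])

# The @ operator calls torch.matmul, which also handles batched inputs
result_at = matrix_a @ matrix_b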

2. Logical operations

# Comparison
result_compare = tensor_3d > 0.5

# Logical operation
result_logical = torch.logical_and(result_add, result_compare)

3. Indexing and slicing

# Indexing
element = tensor_3d[0, 1, 2]

# Slicing
sliced_tensor = tensor_3d[:, 1:3, :]

4. Shape operations

# Reshape
reshaped_tensor = tensor_3d.view(3, 8)

# Transpose (swap dimensions 0 and 2)
transposed_tensor = tensor_3d.transpose(0, 2)
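One pitfall worth flagging (my addition, not in the original): transpose returns a non-contiguous view, so view cannot be applied to it directly; call .contiguous() first, or use .reshape.

transposed = tensor_3d.transpose(0, 2)     # shape (4, 3, 2), a non-contiguous view
print(transposed.is_contiguous())          # False
flat = transposed.contiguous().view(-1)    # copy into contiguous memory, then view
# flat = transposed.reshape(-1)            # reshape does the same in one step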

5. Broadcasting

Broadcasting automatically expands tensors so that tensors with different shapes can take part in element-wise computation.

# Broadcasting
tensor_a = torch.rand((1, 3, 1))
tensor_b = torch.rand((2, 1, 4))
result_broadcast = tensor_a + tensor_b
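To see what this produces (a brief illustration of my own): size-1 dimensions are stretched to match their counterparts, so shapes (1, 3, 1) and (2, 1, 4) combine into (2, 3, 4).

print(tensor_a.shape)          # torch.Size([1, 3, 1])
print(tensor_b.shape)          # torch.Size([2, 1, 4])
print(result_broadcast.shape)  # torch.Size([2, 3, 4])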

Finally

Today I introduced the basic concepts, underlying principles, and common operations of tensors in PyTorch.

Tensors are the fundamental data structure of deep learning, and a solid grasp of them is critical for understanding and implementing neural networks.
