
Machine Learning | PyTorch Concise Tutorial Part 1


The previous articles introduced feature normalization and tensors. Next come two concise PyTorch tutorials that walk through simple, hands-on PyTorch examples.

1. Four arithmetic operations

import torch

a = torch.tensor([2, 3, 4])
b = torch.tensor([3, 4, 5])
print("a + b: ", (a + b).numpy())
print("a - b: ", (a - b).numpy())
print("a * b: ", (a * b).numpy())
print("a / b: ", (a / b).numpy())

Addition, subtraction, multiplication, and division need no explanation. The output is:

a + b:  [5 7 9]
a - b:  [-1 -1 -1]
a * b:  [ 6 12 20]
a / b:  [0.6666667 0.75 0.8]
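One detail worth flagging as a small aside: even though a and b hold integers, the / operator performs true division and returns a floating-point tensor, while // gives floor division. A minimal check:

import torch

a = torch.tensor([2, 3, 4])
b = torch.tensor([3, 4, 5])
print((a / b).dtype)  # torch.float32 -- true division promotes to float
print(a // b)         # tensor([0, 0, 0]) -- floor division stays integer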

2. Linear regression

Linear regression finds a straight line that is as close as possible to the known data points, as shown in the figure:

Figure 1

import torch
from torch import optim


def build_model1():
    return torch.nn.Sequential(torch.nn.Linear(1, 1, bias=False))


def build_model2():
    model = torch.nn.Sequential()
    model.add_module("linear", torch.nn.Linear(1, 1, bias=False))
    return model


def train(model, loss, optimizer, x, y):
    model.train()
    optimizer.zero_grad()
    fx = model.forward(x.view(len(x), 1)).squeeze()
    output = loss.forward(fx, y)
    output.backward()
    optimizer.step()
    return output.item()


def main():
    torch.manual_seed(42)
    X = torch.linspace(-1, 1, 101, requires_grad=False)
    Y = 2 * X + torch.randn(X.size()) * 0.33
    print("X: ", X.numpy(), ", Y: ", Y.numpy())
    model = build_model1()
    loss = torch.nn.MSELoss(reduction='mean')
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    batch_size = 10
    for i in range(100):
        cost = 0.
        num_batches = len(X) // batch_size
        for k in range(num_batches):
            start, end = k * batch_size, (k + 1) * batch_size
            cost += train(model, loss, optimizer, X[start:end], Y[start:end])
        print("Epoch = %d, cost = %s" % (i + 1, cost / num_batches))
    w = next(model.parameters()).data
    print("w = %.2f" % w.numpy())


if __name__ == "__main__":
    main()

(1) Starting from the main function: torch.manual_seed(42) sets the seed of the random number generator so that the same sequence of random numbers is generated on every run. It takes an integer seed and is useful wherever random numbers are involved, such as training neural networks, to guarantee reproducible results;
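A quick sketch of what seeding buys you (the tensor values themselves are arbitrary; only their repeatability matters):

import torch

torch.manual_seed(42)
first = torch.randn(3)   # some pseudo-random values

torch.manual_seed(42)    # reset to the same seed
second = torch.randn(3)  # the exact same values again

print(torch.equal(first, second))  # True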

(2) torch.linspace(-1, 1, 101, requires_grad=False) generates equally spaced values within a specified interval. It takes three main arguments: the start value, the end value, and the number of elements, and returns a tensor containing that many equally spaced values;
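For instance, a minimal self-contained check:

import torch

# 5 evenly spaced points from 0 to 1; step = (1 - 0) / (5 - 1) = 0.25
print(torch.linspace(0, 1, 5))  # tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])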

(3) Internal implementation of build_model1 and build_model2:

  • torch.nn.Sequential(torch.nn.Linear(1, 1, bias=False)) calls the constructor of the nn.Sequential class, passing the linear layer to it as an argument, and returns a neural network model;
  • build_model2 is functionally identical to build_model1; it just uses the add_module() method to attach a submodule named linear (see the sketch below);
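A small sketch showing that the two construction styles yield equivalent modules; only the submodule names differ ("0" vs. "linear"), and parameter values differ only because each Linear is initialized randomly:

import torch

m1 = torch.nn.Sequential(torch.nn.Linear(1, 1, bias=False))

m2 = torch.nn.Sequential()
m2.add_module("linear", torch.nn.Linear(1, 1, bias=False))

# Same structure: one weight tensor of shape (1, 1) each.
print([name for name, _ in m1.named_parameters()])  # ['0.weight']
print([name for name, _ in m2.named_parameters()])  # ['linear.weight']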

(4) torch.nn.MSELoss(reduction='mean') defines the loss function: the mean squared error, averaged over the batch;
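For intuition, MSELoss with reduction='mean' matches the hand-written mean of squared differences. A minimal check:

import torch

fx = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([1.5, 2.0, 2.0])

loss = torch.nn.MSELoss(reduction='mean')
print(loss(fx, y))             # tensor(0.4167)
print(((fx - y) ** 2).mean())  # same value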

(5) optim.SGD(model.parameters(), lr=0.01, momentum=0.9) creates a stochastic gradient descent (SGD) optimizer with momentum; the update rule it applies is sketched below;
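A hedged sketch of that update rule, written out by hand for a single scalar parameter (this mirrors PyTorch's SGD with momentum under default dampening and no weight decay):

import torch

lr, momentum = 0.01, 0.9
w = torch.tensor(0.0)  # a single scalar parameter
buf = torch.zeros(())  # momentum buffer, starts at zero

def sgd_momentum_step(w, buf, grad):
    # buf <- momentum * buf + grad;  w <- w - lr * buf
    buf = momentum * buf + grad
    return w - lr * buf, buf

w, buf = sgd_momentum_step(w, buf, torch.tensor(0.5))
print(w)  # tensor(-0.0050)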

(6) The training set is split into batches of batch_size, and the training loop runs for 100 epochs;

(7) Next is the training function train, which performs one training step for the neural network model. Specifically, it accepts the following parameters:

  • model: the neural network model, usually an instance of a class inheriting from nn.Module;
  • loss: the loss function, used to measure the difference between the model's predictions and the true values;
  • optimizer: the optimizer, used to update the model's parameters;
  • x: the input data, a tensor of type torch.Tensor;
  • y: the target data, a tensor of type torch.Tensor;

(8) train follows the standard PyTorch training steps:

  • Set the model to training mode, enabling train-time behaviors such as dropout and batch normalization (neither appears in this tiny model, but see the sketch after this list);
  • Clear the optimizer's gradient buffers to start a fresh round of gradient computation;
  • Pass the input data through the model to compute predictions, then pass predictions and targets to the loss function to compute the loss;
  • Backpropagate the loss to compute the gradients of the model parameters;
  • Let the optimizer update the model parameters so as to reduce the loss;
  • Return the loss as a Python scalar;
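A tiny illustration of why model.train()/model.eval() matter once such layers exist: Dropout zeroes entries at random in training mode and is the identity in evaluation mode:

import torch

torch.manual_seed(0)
drop = torch.nn.Dropout(p=0.5)
x = torch.ones(6)

drop.train()    # training mode: ~half the entries zeroed, survivors scaled by 1/(1-p) = 2
print(drop(x))
drop.eval()     # evaluation mode: identity
print(drop(x))  # tensor([1., 1., 1., 1., 1., 1.])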

(9)print("Round times = %d, loss value = %s" % (i 1, cost / num_batches)) Finally, print the current training round and loss value. The above code output is as follows:

...
Epoch = 95, cost = 0.10514946877956391
Epoch = 96, cost = 0.10514946877956391
Epoch = 97, cost = 0.10514946877956391
Epoch = 98, cost = 0.10514946877956391
Epoch = 99, cost = 0.10514946877956391
Epoch = 100, cost = 0.10514946877956391
w = 1.98

3. Logistic regression

Logistic regression fits a curve that approximately captures the trajectory of a set of discrete points, as shown in the figure:

Figure 2

import numpy as np
import torch
from torch import optim

from data_util import load_mnist


def build_model(input_dim, output_dim):
    return torch.nn.Sequential(torch.nn.Linear(input_dim, output_dim, bias=False))


def train(model, loss, optimizer, x_val, y_val):
    model.train()
    optimizer.zero_grad()
    fx = model.forward(x_val)
    output = loss.forward(fx, y_val)
    output.backward()
    optimizer.step()
    return output.item()


def predict(model, x_val):
    model.eval()
    output = model.forward(x_val)
    return output.data.numpy().argmax(axis=1)


def main():
    torch.manual_seed(42)
    trX, teX, trY, teY = load_mnist(onehot=False)
    trX = torch.from_numpy(trX).float()
    teX = torch.from_numpy(teX).float()
    trY = torch.tensor(trY)
    n_examples, n_features = trX.size()
    n_classes = 10
    model = build_model(n_features, n_classes)
    loss = torch.nn.CrossEntropyLoss(reduction='mean')
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    batch_size = 100
    for i in range(100):
        cost = 0.
        num_batches = n_examples // batch_size
        for k in range(num_batches):
            start, end = k * batch_size, (k + 1) * batch_size
            cost += train(model, loss, optimizer, trX[start:end], trY[start:end])
        predY = predict(model, teX)
        print("Epoch %d, cost = %f, acc = %.2f%%"
              % (i + 1, cost / num_batches, 100. * np.mean(predY == teY)))


if __name__ == "__main__":
    main()

(1) Starting from the main function: torch.manual_seed(42) was introduced above, so it is skipped here;

(2) load_mnist is the author's own helper that downloads the MNIST dataset; it returns trX and teX as input data and trY and teY as label data (with onehot=False the labels come back as plain class indices). A possible implementation is sketched below;
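data_util itself is not shown in the article; here is a minimal sketch of what load_mnist could look like using torchvision. The flattening to 784 features and the scaling to [0, 1] floats are assumptions based on how trX is used later, not the author's actual code:

import numpy as np
from torchvision import datasets

def load_mnist(onehot=False):
    # Download MNIST and flatten each 28x28 image into a 784-dim float vector.
    tr = datasets.MNIST("./data", train=True, download=True)
    te = datasets.MNIST("./data", train=False, download=True)
    trX = tr.data.numpy().reshape(-1, 784).astype(np.float32) / 255.0
    teX = te.data.numpy().reshape(-1, 784).astype(np.float32) / 255.0
    trY, teY = tr.targets.numpy(), te.targets.numpy()
    if onehot:
        trY = np.eye(10)[trY]
        teY = np.eye(10)[teY]
    return trX, teX, trY, teY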

(3) Internal implementation of build_model: torch.nn.Sequential(torch.nn.Linear(input_dim, output_dim, bias=False)) builds a neural network model containing a single linear layer with input_dim input features, output_dim output features, and no bias term; here n_classes = 10, so the model outputs 10 classes;

(4) The remaining steps define the loss function and the gradient-descent optimizer, split the training set by batch_size, and loop 100 times calling train (the cross-entropy loss used here is checked in the sketch below);
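One difference from the regression example is the loss: CrossEntropyLoss expects raw logits and integer class labels. A minimal check of what it computes (the logits here are made up for illustration):

import torch

logits = torch.tensor([[2.0, 0.5, -1.0],
                       [0.1, 1.5, 0.3]])
target = torch.tensor([0, 1])  # integer class indices, not one-hot

ce = torch.nn.CrossEntropyLoss(reduction='mean')
# Equivalent hand-written form: mean negative log-softmax of the true class.
manual = -torch.log_softmax(logits, dim=1)[torch.arange(2), target].mean()
print(ce(logits, target), manual)  # identical values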

(5) optim.SGD(model.parameters(), lr=0.01, momentum=0.9) again creates a stochastic gradient descent (SGD) optimizer with momentum, as in the linear regression example;

(6) At the end of each training epoch, the predict function is executed to make predictions. It accepts two parameters: model (the trained model) and teX (the data to predict). The specific steps are as follows (a common refinement is sketched after the list):

  • model.eval() sets the model to evaluation mode, meaning the model will not be trained and is used only for inference;
  • the input is passed through the model to obtain output scores;
  • the output is converted to a NumPy array, and argmax() extracts the predicted class for each sample;
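As written, predict still tracks gradients during the forward pass. A common refinement, shown here as a sketch rather than the author's code, is to wrap inference in torch.no_grad():

import torch

def predict(model, x_val):
    model.eval()
    with torch.no_grad():      # skip gradient tracking during inference
        output = model(x_val)  # calling the model invokes forward() plus any hooks
    return output.numpy().argmax(axis=1)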

(7)print("Epoch %d, cost = %f, acc = %.2f%%" % (i + 1, cost / num_batches, 100. * np.mean(predY == teY)))最后打印当前训练的轮次,损失值和acc,上述的代码输出如下(执行很快,但是准确率偏低):

...
Epoch 91, cost = 0.252863, acc = 92.52%
Epoch 92, cost = 0.252717, acc = 92.51%
Epoch 93, cost = 0.252573, acc = 92.50%
Epoch 94, cost = 0.252431, acc = 92.50%
Epoch 95, cost = 0.252291, acc = 92.52%
Epoch 96, cost = 0.252153, acc = 92.52%
Epoch 97, cost = 0.252016, acc = 92.51%
Epoch 98, cost = 0.251882, acc = 92.51%
Epoch 99, cost = 0.251749, acc = 92.51%
Epoch 100, cost = 0.251617, acc = 92.51%

4. Neural network

A classic LeNet network for classifying characters, as shown in the figure:

Figure 3

  • Define a multi-layer neural network
  • Preprocess the dataset and prepare it as input to the network
  • Feed the data into the network
  • Compute the network's loss
  • Backpropagate and compute the gradients
import numpy as np
import torch
from torch import optim

from data_util import load_mnist


def build_model(input_dim, output_dim):
    return torch.nn.Sequential(
        torch.nn.Linear(input_dim, 512, bias=False),
        torch.nn.Sigmoid(),
        torch.nn.Linear(512, output_dim, bias=False),
    )


def train(model, loss, optimizer, x_val, y_val):
    model.train()
    optimizer.zero_grad()
    fx = model.forward(x_val)
    output = loss.forward(fx, y_val)
    output.backward()
    optimizer.step()
    return output.item()


def predict(model, x_val):
    model.eval()
    output = model.forward(x_val)
    return output.data.numpy().argmax(axis=1)


def main():
    torch.manual_seed(42)
    trX, teX, trY, teY = load_mnist(onehot=False)
    trX = torch.from_numpy(trX).float()
    teX = torch.from_numpy(teX).float()
    trY = torch.tensor(trY)
    n_examples, n_features = trX.size()
    n_classes = 10
    model = build_model(n_features, n_classes)
    loss = torch.nn.CrossEntropyLoss(reduction='mean')
    optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    batch_size = 100
    for i in range(100):
        cost = 0.
        num_batches = n_examples // batch_size
        for k in range(num_batches):
            start, end = k * batch_size, (k + 1) * batch_size
            cost += train(model, loss, optimizer, trX[start:end], trY[start:end])
        predY = predict(model, teX)
        print("Epoch %d, cost = %f, acc = %.2f%%"
              % (i + 1, cost / num_batches, 100. * np.mean(predY == teY)))


if __name__ == "__main__":
    main()

(1) This neural network code differs little from the logistic regression above; the difference is build_model, which here builds a model with two linear layers and a Sigmoid activation: a linear layer with input_dim input features and 512 output features, a Sigmoid activation function, and a linear layer with 512 input features and output_dim output features;
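For a rough sense of model size, a quick check (assuming MNIST's 784 input features and 10 classes): 784 × 512 + 512 × 10 = 406,528 weights, since neither linear layer has a bias:

import torch

model = torch.nn.Sequential(
    torch.nn.Linear(784, 512, bias=False),
    torch.nn.Sigmoid(),
    torch.nn.Linear(512, 10, bias=False),
)
print(sum(p.numel() for p in model.parameters()))  # 406528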

(2)print("Epoch %d, cost = %f, acc = %.2f%%" % (i + 1, cost / num_batches, 100. * np.mean(predY == teY)))最后打印当前训练的轮次,损失值和acc,上述的代码输入如下(执行时间比逻辑回归要长,但是准确率要高很多):

Epoch 91, cost = 0.054484, acc = 97.58%
Epoch 92, cost = 0.053753, acc = 97.56%
Epoch 93, cost = 0.053036, acc = 97.60%
Epoch 94, cost = 0.052332, acc = 97.61%
Epoch 95, cost = 0.051641, acc = 97.63%
Epoch 96, cost = 0.050964, acc = 97.66%
Epoch 97, cost = 0.050298, acc = 97.66%
Epoch 98, cost = 0.049645, acc = 97.67%
Epoch 99, cost = 0.049003, acc = 97.67%
Epoch 100, cost = 0.048373, acc = 97.68%

