How to use PyTorch for neural network training

Introduction:
PyTorch is an open source machine learning framework based on Python. Its flexibility and simplicity make it a first choice for many researchers and engineers. This article introduces how to use PyTorch for neural network training and provides corresponding code examples.

1. Install PyTorch
Before starting, you need to install PyTorch. You can choose the version suited to your operating system and hardware through the installation guide on the official website (https://pytorch.org/). Once installed, you can import the PyTorch library in Python and start writing code.
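
As a hedged example (the exact command depends on your platform and on whether you want GPU support, so check the official installation selector), a typical CPU-only installation via pip followed by a quick sanity check might look like this:

pip install torch torchvision

# Quick sanity check in Python
import torch
print(torch.__version__)          # prints the installed PyTorch version
print(torch.cuda.is_available())  # True if a CUDA-capable GPU can be used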

2. Build a neural network model
Before using PyTorch to train a neural network, you first need to build a suitable model. PyTorch provides a class called torch.nn.Module, which you can inherit to define your own neural network model.

The following is a simple example showing how to use PyTorch to build a neural network model containing two fully connected layers:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Two fully connected layers: 784 inputs (28x28 pixels) -> 256 hidden units -> 10 classes
        self.fc1 = nn.Linear(in_features=784, out_features=256)
        self.fc2 = nn.Linear(in_features=256, out_features=10)
    
    def forward(self, x):
        # Flatten each image into a vector of 784 values
        x = x.view(x.size(0), -1)
        x = self.fc1(x)
        x = torch.relu(x)
        x = self.fc2(x)
        return x

net = Net()

In the above code, we first define a class named Net that inherits from torch.nn.Module. In the __init__ method, we define two fully connected layers, fc1 and fc2. The forward method then defines how data propagates forward through the model. Finally, we create an instance of Net.

3. Define the loss function and optimizer
Before training, we need to define the loss function and the optimizer. PyTorch provides a rich selection of loss functions and optimizers, which you can choose according to the task at hand.

Here is an example showing how to define the cross-entropy loss function and the stochastic gradient descent (SGD) optimizer:

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

In the above code, the cross-entropy loss function and the stochastic gradient descent optimizer are assigned to the loss_fn and optimizer variables, respectively. net.parameters() tells the optimizer to update all learnable parameters of the neural network model, and the lr parameter sets the learning rate.
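
As a hedged illustration of that flexibility, you could swap SGD for the Adam optimizer without changing anything else in the later training loop; the learning rate of 0.001 below is just a common starting point, not a tuned value:

# Alternative optimizer: Adam, a common default choice
optimizer = torch.optim.Adam(net.parameters(), lr=0.001)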

4. Prepare the data set
Before training the neural network, we need to prepare the training data set and the test data set. PyTorch provides some practical utility classes to help us load and preprocess data sets.

Here is an example showing how to load the MNIST handwritten digits dataset and preprocess it:

import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5,), (0.5,)),
])

train_set = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

test_set = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=32, shuffle=False)

In the above code, we first define a transform variable used to preprocess the data. We then load the MNIST dataset with the torchvision.datasets.MNIST class, using the train=True and train=False parameters to select the training and test data sets. Finally, we use the torch.utils.data.DataLoader class to wrap each data set in an iterable data loader.
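
As a quick sanity check (not part of the original tutorial), you can pull one batch from the loader and inspect its shape; with batch_size=32 and 28x28 grayscale MNIST images, the input tensor should have shape [32, 1, 28, 28]:

# Fetch a single batch from the training loader and inspect its shape
images, labels = next(iter(train_loader))
print(images.shape)   # expected: torch.Size([32, 1, 28, 28])
print(labels.shape)   # expected: torch.Size([32])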

5. Start training
After preparing the data set, we can start training the neural network. In each iteration of the training loop, we complete the following steps in sequence: feed the input data into the model, compute the loss, backpropagate to obtain the gradients, and update the model parameters with the optimizer.

Here is an example showing how to use PyTorch for neural network training:

epochs = 5  # number of passes over the training set (adjust as needed)

for epoch in range(epochs):
    running_loss = 0.0
    for i, data in enumerate(train_loader):
        inputs, labels = data
        
        optimizer.zero_grad()
        
        outputs = net(inputs)
        loss = loss_fn(outputs, labels)
        
        loss.backward()
        optimizer.step()
        
        running_loss += loss.item()
        
        if (i+1) % 100 == 0:
            print('[%d, %5d] loss: %.3f' % (epoch+1, i+1, running_loss/100))
            running_loss = 0.0

In the above code, we first iterate over the training data loader with the enumerate function to obtain the input data and labels. We then zero the gradients, feed the inputs into the model, and compute the predictions and the loss. Next, we compute the gradients with the backward method and update the model parameters with the step method. Finally, we accumulate the loss and print it periodically.
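
As an optional sketch that is not part of the original example, the same loop can run on a CUDA-capable GPU if one is available, by moving the model and each batch to the device:

# Optional: run training on a GPU when one is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net.to(device)

# Inside the loop, move each batch to the same device before the forward pass:
#     inputs, labels = inputs.to(device), labels.to(device)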

6. Test the model
After training is complete, we still need to test the model's performance. We can evaluate the model by calculating its accuracy on the test data set.

Here is an example that shows how to use PyTorch to test the accuracy of the model:

correct = 0
total = 0

with torch.no_grad():
    for data in test_loader:
        inputs, labels = data
        outputs = net(inputs)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

accuracy = 100 * correct / total
print('Accuracy: %.2f %%' % accuracy)

In the above code, we first define two variables, correct and total, used to count the number of correctly classified samples and the total number of samples. Next, we use the torch.no_grad() context manager to disable gradient computation, which reduces memory consumption. Then, for each batch, we compute the predictions and update the counts of correctly classified samples and total samples. Finally, we calculate the accuracy from these two counts and print it.
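
One additional, hedged note: when a model contains layers such as dropout or batch normalization (this simple example has neither), it is good practice to switch the model into evaluation mode before testing and back to training mode afterwards:

net.eval()    # put layers like dropout/batchnorm into evaluation behavior
# ... run the test loop above ...
net.train()   # switch back before any further training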

Summary:
Through this article, you have learned the basic steps of using PyTorch for neural network training: building a neural network model, defining the loss function and optimizer, preparing the data sets, running the training loop, and testing the model. I hope this article is helpful to your work and study with PyTorch.

References:

  1. PyTorch official website: https://pytorch.org/
  2. PyTorch documentation: https://pytorch.org/docs/stable/index.html
