
How to implement gradient descent algorithm using Python?

PHPz
Release: 2023-09-19 14:55:54

Gradient descent is an optimization algorithm widely used in machine learning and deep learning. Its basic idea is to approach a minimum of a function through iteration, that is, to find the parameter values that minimize the function's error. In this article, we will learn how to implement the gradient descent algorithm in Python and give concrete code examples.

The core idea of the gradient descent algorithm is to take iterative steps in the direction opposite to the function's gradient, thereby gradually approaching a minimum of the function. In practical applications, gradient descent comes in two common variants: batch gradient descent (Batch Gradient Descent) and stochastic gradient descent (Stochastic Gradient Descent).

First, we introduce the implementation of the batch gradient descent algorithm. Suppose we want to minimize a univariate function f(x), where x is the variable. Using gradient descent, we need the first derivative of f(x) with respect to x, that is, f'(x), which represents the rate of change of the function at the current point. We then update the parameter x iteratively: x = x - learning_rate * f'(x), where learning_rate is the learning rate, which controls the step size of each parameter update.
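
As a concrete step: if f(x) = x^2, then f'(x) = 2x. Starting from x = 2 with learning_rate = 0.1, one update gives x = 2 - 0.1 * 4 = 1.6, which moves x toward the minimum at x = 0.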

The following is a Python code example of the batch gradient descent algorithm:

def batch_gradient_descent(f, initial_x, learning_rate, num_iterations):
    x = initial_x
    for i in range(num_iterations):
        gradient = calculate_gradient(f, x)
        x = x - learning_rate * gradient  # step in the direction opposite to the gradient
    return x

def calculate_gradient(f, x):
    # Central-difference approximation of f'(x). The step size h can be tuned
    # to the function; values much smaller than ~1e-6 tend to suffer from
    # floating-point cancellation.
    h = 1e-6
    return (f(x + h) - f(x - h)) / (2 * h)

In the above code, the batch_gradient_descent function receives four parameters: f is the function to be optimized, initial_x is the initial parameter value, learning_rate is the learning rate, and num_iterations is the number of iterations. The calculate_gradient function numerically approximates the derivative of f at a given point x using a central difference.
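
As a quick sanity check, here is a hypothetical usage of the function above, minimizing f(x) = (x - 3)^2, whose minimum lies at x = 3 (the learning rate and iteration count are illustrative values, not prescriptions):

def f(x):
    return (x - 3) ** 2

# Start from x = 0 and take 100 gradient steps
result = batch_gradient_descent(f, initial_x=0.0, learning_rate=0.1, num_iterations=100)
print(result)  # converges to approximately 3.0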

Next, we introduce the implementation of the stochastic gradient descent algorithm. It differs from batch gradient descent in that each parameter update uses only a randomly selected subset of the samples (a mini-batch) rather than the entire data set. This makes each update much cheaper on large-scale data sets, but the added noise may slow convergence.

The following is a Python code example of the stochastic gradient descent algorithm:

import random

def stochastic_gradient_descent(f, initial_x, learning_rate, num_iterations, batch_size, train_data):
    x = initial_x
    for i in range(num_iterations):
        # Draw a random mini-batch from the training data
        batch = random.sample(train_data, batch_size)
        gradient = calculate_batch_gradient(f, x, batch)
        x = x - learning_rate * gradient
    return x

def calculate_batch_gradient(f, x, batch):
    # Average gradient of the squared error over the mini-batch, for a model
    # whose prediction for input x_val with parameter x is f(x, x_val)
    gradient = 0
    for x_val, y_val in batch:
        gradient += (f(x, x_val) - y_val) * x_val
    return gradient / len(batch)

In the above code, the stochastic_gradient_descent function receives six parameters: f is the model's prediction function (taking the current parameter x and an input x_val), initial_x is the initial parameter value, learning_rate is the learning rate, num_iterations is the number of iterations, batch_size is the number of samples used in each iteration, and train_data is the list of (x_val, y_val) training pairs. The calculate_batch_gradient function computes the average gradient of the squared error over the randomly selected mini-batch.
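
For illustration, here is a hypothetical way to call the function above on synthetic data generated from the linear relation y = 2x; the model, data, and hyperparameter values below are assumptions chosen for the example:

# Synthetic training data for the relation y = 2 * x
train_data = [(x_val, 2 * x_val) for x_val in range(1, 101)]

# Linear model: prediction for input x_val with parameter x
def model(x, x_val):
    return x * x_val

learned_x = stochastic_gradient_descent(model, initial_x=0.0, learning_rate=0.0001,
                                        num_iterations=1000, batch_size=10,
                                        train_data=train_data)
print(learned_x)  # converges to approximately 2.0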

To sum up, we introduced how to implement the gradient descent algorithm in Python and gave concrete code examples of both batch gradient descent and stochastic gradient descent. By choosing suitable values for the learning rate, the number of iterations, and the batch size, we can use gradient descent to optimize a wide range of functions and improve the performance of machine learning and deep learning models.
