
What is regularization in machine learning?

王林
Release: 2023-11-06 11:25:01

1. Introduction

In machine learning, a model may overfit or underfit during training. To prevent this, we apply regularization, which helps the model generalize properly to unseen data. In short, regularization helps us obtain a better model by reducing the risk of both overfitting and underfitting.

In this article, we will cover what regularization is and the main types of regularization. We will also discuss the related concepts of bias, variance, underfitting, and overfitting.

Without further ado, let's get started!

2. Bias and Variance

Bias and variance describe two aspects of the gap between the model we learn and the true underlying model. They are defined as follows:

  • Bias is the difference between the average output of models trained on all possible training sets and the output of the true model.
  • Variance is the variability of the outputs of models trained on different training sets.


Bias reduces the model's sensitivity to individual data points and increases generalization; training time also drops, since the required function is less complex. However, high bias means the model makes strong assumptions about the target function, which can lead to underfitting.

Variance in machine learning refers to the model's sensitivity to small fluctuations in the training set. When variance is high, the algorithm models the noise and outliers in the training data; this situation is called overfitting. Because such a model has essentially memorized every data point, it cannot make accurate predictions when evaluated on new data.

A well-balanced model has both low bias and low variance; high bias leads to underfitting, while high variance leads to overfitting.

3. Underfitting

Underfitting occurs when a model fails to learn the patterns in the training data and therefore cannot generalize to new data. An underfit model performs poorly even on the training data and produces inaccurate predictions. Underfitting typically arises when the model has high bias and low variance.



4. Overfitting

Overfitting occurs when a model performs very well on the training data but poorly on the test data (new data). In this case, the model has fit itself to the noise in the training data, which hurts its performance on the test set. Overfitting typically arises from low bias and high variance.
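To make underfitting and overfitting concrete, here is a minimal illustrative sketch (not from the original article; the sine-curve data, noise level, and polynomial degrees are all assumptions of my choosing). It fits polynomials of increasing degree to noisy samples of a sine curve using NumPy: a degree-1 fit underfits (high error on both sets), while a degree-15 fit drives the training error near zero yet does worse on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a sine curve: 20 training points, 200 test points.
x_train = rng.uniform(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 20)
x_test = rng.uniform(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, 200)

def poly_mse(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coefs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for degree in (1, 3, 15):
    train_mse, test_mse = poly_mse(degree)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

Degree 1 is high bias (both errors large), degree 3 fits well, and degree 15 is high variance: its training error collapses while its test error stays clearly higher.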



5. The concept of regularization

The term "regularization" describes techniques for calibrating machine learning models by adjusting the loss function, in order to avoid overfitting or underfitting.



By applying regularization techniques, we can make a machine learning model generalize better to unseen data, thereby effectively reducing the error on the test set.

6. L1 regularization

In contrast to ridge regression, L1 regularization adds a penalty term to the loss function equal to the sum of the absolute values of all coefficients, as follows:

Loss = Σᵢ (yᵢ − ŷᵢ)² + λ Σⱼ |βⱼ|


In the Lasso regression model, the penalty grows with the absolute values of the regression coefficients, analogous to how ridge regression penalizes their squares. L1 regularization performs well at improving the accuracy of linear regression models. Moreover, because L1 penalizes all parameters equally, it can drive some weights exactly to zero, producing a sparse model that removes certain features (a weight of 0 is equivalent to removing the feature).
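The following is a minimal sketch of how an L1 penalty produces exact zeros, using proximal gradient descent (ISTA) in plain NumPy rather than any library's Lasso implementation; the synthetic data, the λ value, and the iteration count are illustrative assumptions. The soft-thresholding step is what snaps small weights to exactly zero:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic data: 10 features, but only the first two actually matter.
n, d = 100, 10
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[0], w_true[1] = 3.0, -2.0
y = X @ w_true + rng.normal(0, 0.5, n)

def lasso_ista(X, y, lam, n_iter=1000):
    """Minimize 0.5*||Xw - y||^2 + lam*||w||_1 by proximal gradient descent."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)
        z = w - step * grad
        # Soft-thresholding: the proximal operator of the L1 norm.
        # It shrinks every weight toward 0 and sets small ones exactly to 0.
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return w

w = lasso_ista(X, y, lam=20.0)
print("weights:", np.round(w, 3))
print("exact zeros:", int(np.sum(w == 0.0)))
```

Running this, the eight irrelevant weights come out exactly 0.0 (not merely small), while the two informative weights survive, slightly shrunk toward zero by the penalty — this is the feature-selection effect described above.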

7. L2 regularization

L2 regularization is also implemented by adding a penalty term to the loss function, in this case equal to the sum of the squares of all coefficients, as follows:

Loss = Σᵢ (yᵢ − ŷᵢ)² + λ Σⱼ βⱼ²

L2 regularization (ridge regression) is generally considered the method to adopt when the data exhibits multicollinearity (highly correlated independent variables). Although ordinary least squares (OLS) estimates remain unbiased under multicollinearity, their large variance can cause predictions to deviate significantly from actual values. Ridge regression reduces this variance: it introduces a shrinkage parameter to address multicollinearity, reducing all weights by a fixed proportion and smoothing them without driving any exactly to zero.
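The ridge estimate has a closed-form solution. The sketch below (with illustrative data and λ values of my choosing) implements w = (XᵀX + λI)⁻¹ Xᵀy in NumPy on two nearly collinear features, showing that increasing λ shrinks the coefficients, while λ = 0 recovers the unstable OLS solution:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two nearly collinear features: x2 is x1 plus a tiny perturbation.
n = 50
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.01, n)
X = np.column_stack([x1, x2])
y = 2 * x1 + 1 * x2 + rng.normal(0, 0.1, n)

def ridge(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

for lam in (0.0, 0.1, 1.0, 10.0):
    w = ridge(X, y, lam)
    print(f"lam={lam:5.1f}  w={np.round(w, 3)}  ||w||={np.linalg.norm(w):.3f}")
```

With λ = 0 the near-singular XᵀX lets the two coefficients take wild offsetting values; any positive λ stabilizes them, and the coefficient norm decreases monotonically as λ grows — exactly the variance reduction the text describes.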

8. Summary

Based on the analysis above, the key points about regularization are summarized as follows:

  • L1 regularization produces a sparse weight matrix, i.e., a sparse model, and can therefore be used for feature selection.
  • L2 regularization prevents model overfitting; to a certain extent, L1 also prevents overfitting and improves the model's generalization ability.
  • L1 regularization (Lasso) corresponds to assuming a Laplace prior distribution on the parameters, which guarantees sparsity, i.e., some parameters equal exactly 0.
  • L2 regularization (ridge regression) corresponds to assuming a Gaussian prior distribution on the parameters, which keeps the model stable, i.e., parameter values are neither too large nor too small.
  • In practice, if the features are high-dimensional and sparse, use L1 regularization; if the features are low-dimensional and dense, use L2 regularization.


source:51cto.com