This article introduces examples of using TensorFlow to implement the lasso regression and ridge regression algorithms. It should serve as a useful reference; friends in need can refer to it.
There are also regularization methods that limit the influence of coefficients on the output of a regression algorithm. The two most commonly used regularization methods are lasso regression and ridge regression.
The lasso regression and ridge regression algorithms are very similar to the conventional linear regression algorithm. The one difference is that a regularization term is added to the formula to limit the slope (or partial slope). The main reason for doing this is to limit each feature's impact on the dependent variable, which is achieved by adding a penalty term that depends on the slope A to the loss function.
For the lasso regression algorithm, an extra term is added to the loss function: a given multiple of the slope A. We could use TensorFlow's logical operations, but they have no gradients associated with them; instead we use a continuous approximation of a step function, also called a continuous step function, which jumps sharply at a cutoff point. You will see how to use the lasso regression algorithm in a moment.
For the ridge regression algorithm, an L2 norm term is added, i.e., an L2 penalty on the slope coefficient.
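To make the ridge idea concrete, here is a minimal NumPy sketch (the data set, penalty weight `ridge_lambda`, and learning rate are all assumptions for illustration, not values from the original listing). The only change from plain least-squares gradient descent is the extra `2 * ridge_lambda * A` term in the slope update, which is the gradient of the L2 penalty `ridge_lambda * A**2`:

```python
import numpy as np

# Made-up data standing in for the article's data set: y ≈ 0.7 * x + 4.5 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 3.0, size=100)
y = 0.7 * x + 4.5 + rng.normal(0.0, 0.3, size=100)

A, b = 0.0, 0.0              # slope and intercept
lr = 0.01                    # learning rate (assumed)
ridge_lambda = 1.0           # assumed L2 penalty weight

for _ in range(1500):
    err = A * x + b - y
    # MSE gradient plus the gradient of the L2 penalty ridge_lambda * A**2.
    A -= lr * (2.0 * np.mean(err * x) + 2.0 * ridge_lambda * A)
    b -= lr * 2.0 * np.mean(err)
```

After training, the fitted slope comes out noticeably smaller than the unpenalized least-squares slope, which is exactly the shrinkage effect the L2 term is meant to produce.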
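The article's original TensorFlow 1.x listing was not preserved, so here is a NumPy sketch of the lasso training loop it describes: ordinary least-squares gradient descent plus a sigmoid-based continuous step penalty on the slope. The data, cutoff, steepness, penalty scale, and learning rate are all assumed values for illustration, so the printed numbers will not match the output below.

```python
import numpy as np

# Synthetic stand-in data: y ≈ 0.7 * x + 4.5 plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 3.0, size=100)
y = 0.7 * x + 4.5 + rng.normal(0.0, 0.3, size=100)
xc = x - x.mean()            # centering decouples the slope and intercept updates

A, b = 0.0, 0.0              # slope and intercept
lr = 0.01                    # small step size; too large and training diverges
cutoff = 0.9                 # penalize slopes above this value (assumed)
steep = 50.0                 # steepness of the continuous step (assumed)

for _ in range(1500):
    err = A * xc + b - y
    # Sigmoid approximation of a step function on the slope A: nearly flat
    # away from the cutoff, rising sharply across it.
    sig = 1.0 / (1.0 + np.exp(-steep * (A - cutoff)))
    # Gradient of the MSE plus the continuous step penalty 99 * sig.
    A -= lr * (2.0 * np.mean(err * xc) + 99.0 * steep * sig * (1.0 - sig))
    b -= lr * 2.0 * np.mean(err)

print(A, b, np.mean(err ** 2))
```

Because the true slope here sits below the cutoff, the penalty only nudges A slightly; a slope that tried to cross the cutoff would be pushed back hard by the steep sigmoid.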
Output result:
Step #300 A = [[ 0.77170753]] b = [[ 1.82499862]]
Loss = [[ 10.26473045]]
Step #600 A = [[ 0.75908542]] b = [[ 3.2220633]]
Loss = [[ 3.06292033]]
Step #900 A = [[ 0.74843585]] b = [[ 3.9975822]]
Loss = [[ 1.23220456]]
Step #1200 A = [[ 0.73752165]] b = [[ 4.42974091]]
Loss = [[ 0.57872057]]
Step #1500 A = [[ 0.72942668]] b = [[ 4.67253113]]
Loss = [[ 0.40874988]]
The lasso regression algorithm is implemented by adding a continuous step function to the standard linear regression estimate. Because the step function is steep, we need to be careful with the step size: a step size that is too large will prevent convergence.
Related recommendations:
Example of using TensorFlow to implement Deming regression algorithm