TensorFlow implements Softmax Regression to recognize handwritten digits. MNIST (Mixed National Institute of Standards and Technology database) is a simple machine-vision dataset of 28×28-pixel handwritten digits. Each pixel carries only a grayscale value: blank areas are 0, and handwritten strokes take values in [0, 1] according to ink darkness. Flattening each image gives a 784-dimensional feature vector, discarding the two-dimensional spatial information. The targets fall into 10 categories, the digits 0 through 9. Data is loaded with input_data.read_data_sets: 55,000 training samples, 10,000 test samples, and 5,000 validation samples. Each sample's annotation (label) is a 10-dimensional vector, the one-hot encoding of its digit class. The training set trains the model, the validation set checks progress during training, and the test set evaluates the final model (accuracy, recall, F1-score).
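For reference, this is the loading step as it appears in the complete listing at the end of these notes:

```python
from tensorflow.examples.tutorials.mnist import input_data

# Downloads (if needed) and loads MNIST; one_hot=True encodes each label
# as a 10-dimensional one-hot vector.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print(mnist.train.images.shape, mnist.train.labels.shape)            # (55000, 784) (55000, 10)
print(mnist.test.images.shape, mnist.test.labels.shape)              # (10000, 784) (10000, 10)
print(mnist.validation.images.shape, mnist.validation.labels.shape)  # (5000, 784) (5000, 10)
```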
Algorithm design: Softmax Regression trains a handwritten-digit classification model that estimates the probability of each category and outputs the digit with the highest probability. Features are weighted per class to produce class evidence, and model training adjusts these weights. Softmax exponentiates each class's evidence with the exp function and normalizes, so the output probabilities over all categories sum to 1: y = softmax(Wx + b).
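A minimal NumPy sketch of the softmax normalization (illustrative only; the scores array is a made-up example of the class evidence Wx + b):

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability, exponentiate each score,
    # then normalize so the outputs sum to 1 across classes.
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # hypothetical class evidence
print(softmax(scores))              # approx [0.659 0.242 0.099], sums to 1
```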
NumPy is implemented in C and Fortran and calls matrix-operation libraries such as OpenBLAS and MKL. TensorFlow likewise performs its dense, complex operations outside of Python: you first define a computation graph, and then all operations run outside Python, without transferring intermediate results back to Python at every step.
import tensorflow as tf loads the TensorFlow library. sess = tf.InteractiveSession() creates an InteractiveSession and registers it as the default session; the data and operations of different sessions are independent of each other. x = tf.placeholder(tf.float32, [None, 784]) creates a Placeholder to receive input data: the first argument is the data type, the second the tensor shape. None leaves the number of inputs unrestricted; each input is a 784-dimensional vector.
A Tensor stores data transiently and disappears once used. A Variable persists across model-training iterations, existing long-term and being updated in each round. The weights and biases of the Softmax Regression model are Variable objects initialized to 0; training automatically learns appropriate values. (For complex networks, the initialization method matters.) W = tf.Variable(tf.zeros([784, 10])): 784 feature dimensions, 10 categories. The label is a 10-dimensional vector after one-hot encoding.
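Collected from the complete listing below, the setup code for the session, the input placeholder, and the model parameters:

```python
import tensorflow as tf

sess = tf.InteractiveSession()               # registered as the default session
x = tf.placeholder(tf.float32, [None, 784])  # None: any batch size; 784 features
W = tf.Variable(tf.zeros([784, 10]))         # weights: 784 inputs -> 10 classes
b = tf.Variable(tf.zeros([10]))              # one bias per class
```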
The Softmax Regression algorithm: y = tf.nn.softmax(tf.matmul(x, W) + b). tf.nn contains a large number of neural-network components; tf.matmul is the matrix-multiplication function. TensorFlow implements both forward and backward computation automatically: as long as the loss is defined, training derives the gradients and learns the Softmax Regression model parameters by gradient descent.
Define a loss function to describe the model's classification accuracy. The smaller the loss, the smaller the deviation between the model's classification results and the true values, and the more accurate the model. The initial parameters are all zero, giving an initial loss; the training goal is to reduce the loss and find a global or local optimum. Cross-entropy is the loss function commonly used for classification: y is the predicted probability distribution, y' the true probability distribution (the one-hot label), and cross-entropy measures how well the model predicts the true distribution. cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1])). A placeholder y_ is defined to feed in the real labels; tf.reduce_sum computes the sum over classes, and tf.reduce_mean averages the per-sample results over each batch.
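The corresponding lines from the listing, with the per-class sum and the batch average annotated:

```python
y_ = tf.placeholder(tf.float32, [None, 10])  # true labels, one-hot encoded
# For each sample, -sum over the 10 classes of y_ * log(y): when the true
# class is k, this reduces to -log(y[k]). reduce_mean averages over the batch.
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))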
Define the optimization algorithm: stochastic gradient descent, SGD (Stochastic Gradient Descent). TensorFlow differentiates automatically based on the computation graph and trains with the Back Propagation algorithm, iteratively updating parameters in each round to reduce the loss. An encapsulated optimizer is provided; feed it data each round, and TensorFlow automatically adds the background operations that implement backpropagation and gradient descent. train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy) calls tf.train.GradientDescentOptimizer with a learning rate of 0.5 and the optimization target cross_entropy, yielding the training operation train_step.
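The minimize interface is shared across tf.train optimizers, so swapping algorithms is a one-line change. The Adam alternative sketched below is an illustration only, not part of this chapter's experiment, and its learning rate is simply TensorFlow's default:

```python
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
# Illustrative alternative (not used for the ~92% result below):
# train_step = tf.train.AdamOptimizer(0.001).minimize(cross_entropy)
```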
tf.global_variables_initializer().run() runs TensorFlow's global parameter initializer, tf.global_variables_initializer.
batch_xs, batch_ys = mnist.train.next_batch(100), followed by the training operation train_step: each iteration randomly selects 100 samples from the training set to form a mini-batch, feeds them to the placeholders, and runs train_step on those samples. Training on a small random subset each step, stochastic gradient descent converges faster; training on all samples every time would require a large amount of computation and make it harder to escape local optima.
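Initialization and the training loop as they appear in the complete listing:

```python
tf.global_variables_initializer().run()

for i in range(1000):                                # 1000 rounds of SGD
    batch_xs, batch_ys = mnist.train.next_batch(100) # random mini-batch of 100
    train_step.run({x: batch_xs, y_: batch_ys})      # feed both placeholders
```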
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1)) verifies the model's predictions. tf.argmax finds the index of the maximum value in a tensor: tf.argmax(y, 1) is the digit with the highest predicted probability, and tf.argmax(y_, 1) is the sample's true digit category. tf.equal checks whether the predicted category is correct, returning a boolean per sample.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) computes the prediction accuracy over all samples. tf.cast converts the output type of correct_prediction from bool to float32.
print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels})) feeds the test-set features and labels into the evaluation and computes the model's test-set accuracy. For Softmax Regression classification on MNIST, the average test-set accuracy is about 92%.
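The evaluation step in full:

```python
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))    # bool per sample
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))  # mean of 0/1 values
print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels})) # ~0.92
```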
TensorFlow implements a simple machine-learning algorithm in four steps:
1. Define the algorithm formula, the neural network's forward computation.
2. Define the loss, select an optimizer, and have the optimizer minimize the loss.
3. Iteratively train on the data.
4. Evaluate accuracy on the test set and validation set.
The defined formulas are just a Computation Graph; computation executes only when the run method is called with fed data.
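A tiny sketch of this deferred execution (the placeholder and the multiply are made-up examples): defining an op only adds a node to the graph; a value is produced only when eval/run is called with data.

```python
import tensorflow as tf

sess = tf.InteractiveSession()
a = tf.placeholder(tf.float32)
doubled = a * 2                # adds a node to the graph; computes nothing yet
print(doubled)                 # prints a Tensor handle, not a number
print(doubled.eval({a: 3.0}))  # 6.0 -- the computation runs here
```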
```python
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
print(mnist.train.images.shape, mnist.train.labels.shape)
print(mnist.test.images.shape, mnist.test.labels.shape)
print(mnist.validation.images.shape, mnist.validation.labels.shape)

import tensorflow as tf
sess = tf.InteractiveSession()
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

tf.global_variables_initializer().run()
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    train_step.run({x: batch_xs, y_: batch_ys})

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval({x: mnist.test.images, y_: mnist.test.labels}))
```
Reference materials:
"TensorFlow Practice"