Units, also known as nodes or neurons, are the core of neural networks. Each unit receives one or more inputs, multiplies each input by a weight, sums the weighted inputs, and adds a bias value. This sum is then passed through an activation function, and in a neural network the unit's output is typically sent on as input to other units.
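As a minimal sketch of that computation (not tied to any particular library), a single unit can be written in Python with NumPy; the input values, weights, bias, and the choice of a sigmoid activation are all illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    # A common activation function; squashes any real value into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def unit_output(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias, fed through the activation.
    z = np.dot(weights, inputs) + bias
    return sigmoid(z)

# Illustrative values: three inputs, three weights, one bias.
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.4, 0.1, -0.7])
b = 0.2
print(unit_output(x, w, b))  # about 0.14 for these numbers
```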
The multilayer perceptron, also known as a feedforward neural network, is one of the most widely used and simplest artificial neural network models. It consists of multiple layers of units connected in sequence, which together map the input features to the target values. The structure is called "feedforward" because the feature values are passed "forward" through the network, with each layer transforming them in turn until the final layer produces an output that is compared against the target.
A feedforward neural network has three types of layers. The input layer contains one unit for each feature of an observation: if there are 100 observed features, the input layer has 100 nodes. The hidden layers sit between the input layer and the output layer and are responsible for transforming the feature values received from the input layer. Finally, the output layer converts the hidden layers' representation into values that are useful for the task; for binary classification, for example, a sigmoid function in the output layer scales the output to a class probability between 0 and 1. A concrete sketch of these three layer types follows below.
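Here is a hedged sketch of one forward pass through a tiny feedforward network in NumPy, showing the roles of the input, hidden, and output layers; the layer sizes, the random weights, and the ReLU/sigmoid choices are assumptions made for illustration, not a reference implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 4     # input layer: one unit per feature observation
n_hidden = 8       # hidden layer size (illustrative choice)
n_outputs = 1      # output layer: one sigmoid unit for binary classification

# Small random initial parameters (see the initialization discussion below).
W1 = rng.normal(0, 0.1, size=(n_hidden, n_features))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, size=(n_outputs, n_hidden))
b2 = np.zeros(n_outputs)

def relu(z):
    return np.maximum(0, z)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # Hidden layer transforms the raw feature values.
    h = relu(W1 @ x + b1)
    # Output layer squashes the hidden representation into a class probability.
    return sigmoid(W2 @ h + b2)

x = np.array([0.2, -0.5, 1.3, 0.7])   # one observation with four features
print(forward(x))                      # probability between 0 and 1
```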
The parameters of a neural network are usually initialized to small random values, drawn for example from a Gaussian or a uniform distribution. An observation is fed forward through the network, and a loss function measures the difference between the network's output and the true value. The backpropagation algorithm then determines how much each parameter contributes to that difference, and an optimization algorithm adjusts each weight by the amount indicated by its gradient.
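As a hedged illustration of this loop (initialization, loss, backpropagated gradients, and an optimizer step), the snippet below performs one update of a single sigmoid unit with binary cross-entropy loss and plain gradient descent; the data values and the learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Small random initialization (Gaussian here; a uniform distribution also works).
w = rng.normal(0, 0.1, size=3)
b = 0.0
lr = 0.5                                # learning rate (illustrative)

x = np.array([0.5, -1.2, 3.0])          # one observation
y = 1.0                                 # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward propagation: compute the prediction and the loss.
p = sigmoid(np.dot(w, x) + b)
loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))   # binary cross-entropy

# Backpropagation: for sigmoid + cross-entropy, dLoss/dz simplifies to (p - y).
dz = p - y
dw = dz * x
db = dz

# Optimization step: move each parameter against its gradient.
w -= lr * dw
b -= lr * db
print(loss, w, b)
```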
The neural network learns from the training data through many rounds of forward propagation and backpropagation. One complete pass in which every observation has been sent through the network is called an epoch, and training typically consists of multiple epochs so that the parameters are adjusted iteratively.
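A hedged sketch of how epochs organize training, reusing the single-unit setup from the previous example; the toy dataset, learning rate, and epoch count are assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dataset: 6 observations with 3 features each, and binary labels.
X = rng.normal(size=(6, 3))
y = np.array([0, 1, 0, 1, 1, 0], dtype=float)

w = rng.normal(0, 0.1, size=3)
b = 0.0
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_epochs = 20                            # each epoch = one pass over every observation
for epoch in range(n_epochs):
    for xi, yi in zip(X, y):
        p = sigmoid(np.dot(w, xi) + b)   # forward propagation
        dz = p - yi                      # backpropagated error signal
        w -= lr * dz * xi                # gradient-descent updates
        b -= lr * dz
```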