Recurrent neural networks (RNNs) are a powerful type of artificial neural network (ANN) used in applications such as Apple's Siri and Google's voice search. Their ability to retain past inputs through an internal memory makes them well suited to tasks like stock price prediction, text generation, transcription, and machine translation. Unlike traditional neural networks, where inputs and outputs are independent of one another, RNN outputs depend on the previous elements of the sequence. Furthermore, RNNs reuse the same weights and biases at every time step, so gradient descent adjusts a single shared set of parameters for the whole sequence.
Consider a basic RNN applied to stock price forecasting with a series like [45, 56, 45, 49, 50, ...]. The inputs X0 to Xt are the successive observations: X0 is 45, X1 is 56, and so on, and each prediction draws on the values that came before it to estimate the next element of the sequence (a minimal windowing sketch follows).
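The windowing idea can be made concrete with a short sketch; this is not code from the original tutorial, and the window length of 3 is an arbitrary choice:

```python
# A minimal sketch of turning a price series into (past window, next value) pairs.
prices = [45, 56, 45, 49, 50, 52, 48, 51]
window = 3  # hypothetical window length

X, y = [], []
for i in range(len(prices) - window):
    X.append(prices[i:i + window])   # past values, e.g. [45, 56, 45]
    y.append(prices[i + window])     # next value to predict, e.g. 49

print(X[0], "->", y[0])  # [45, 56, 45] -> 49
```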
In RNNs, information cycles through a loop, making the output a function of both the current and previous inputs.
The input layer (X) processes the initial input and passes it to the middle layer (A), which comprises multiple hidden layers, each with its own activation function, weights, and biases. These parameters are shared across time steps, so the hidden layer can be viewed as a single layer looped over itself rather than a stack of distinct layers. To compute gradients, RNNs use backpropagation through time (BPTT) rather than standard backpropagation: because the parameters are shared, BPTT sums the error gradients contributed by every time step.
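As a rough illustration of that parameter sharing, here is a hedged NumPy sketch of a single recurrent layer unrolled over a few time steps; the dimensions are arbitrary and this is not the tutorial's code:

```python
import numpy as np

# The same weight matrices (W_x, W_h) and bias b are reused at every time step,
# which is why BPTT sums the gradients over all steps. Shapes are illustrative.
rng = np.random.default_rng(0)
input_dim, hidden_dim, steps = 1, 4, 5

W_x = rng.normal(size=(hidden_dim, input_dim))   # input-to-hidden weights (shared)
W_h = rng.normal(size=(hidden_dim, hidden_dim))  # hidden-to-hidden weights (shared)
b = np.zeros(hidden_dim)

x_seq = rng.normal(size=(steps, input_dim))      # one input sequence
h = np.zeros(hidden_dim)                         # initial hidden state

for x_t in x_seq:
    # h_t = tanh(W_x @ x_t + W_h @ h_{t-1} + b): the output depends on the current
    # input and on everything seen so far through the previous hidden state.
    h = np.tanh(W_x @ x_t + W_h @ h + b)

print(h.shape)  # (4,): final hidden state after the whole sequence
```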
RNNs offer flexibility in input and output lengths, unlike feedforward networks with single inputs and outputs. This adaptability allows RNNs to handle diverse tasks, including music generation, sentiment analysis, and machine translation. Four main types exist:

- One-to-one: a single input mapped to a single output, equivalent to a plain neural network.
- One-to-many: a single input produces a sequence of outputs, as in music generation.
- Many-to-one: a sequence of inputs produces a single output, as in sentiment analysis.
- Many-to-many: a sequence of inputs produces a sequence of outputs, as in machine translation.
Convolutional neural networks (CNNs) are feedforward networks designed for spatial data such as images and are widely used in computer vision. Plain fully connected networks struggle to capture the dependencies between image pixels, whereas CNNs, with their convolutional, ReLU, pooling, and fully connected layers, handle them well.
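As a point of comparison, a minimal Keras sketch of those CNN layer types might look like the following; the input shape and layer sizes are illustrative assumptions, not part of the tutorial:

```python
from tensorflow.keras import layers, models

# Minimal sketch of the CNN layer types named above: convolution, ReLU,
# pooling, and a fully connected (Dense) output layer.
cnn = models.Sequential([
    layers.Input(shape=(28, 28, 1)),                       # e.g. grayscale images
    layers.Conv2D(16, kernel_size=3, activation="relu"),   # convolution + ReLU
    layers.MaxPooling2D(pool_size=2),                      # pooling
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),                 # fully connected output
])
cnn.summary()
```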
Key differences:

- CNNs handle spatial data such as images; RNNs handle sequential or time-series data.
- CNNs are feedforward and work with fixed-size inputs and outputs; RNNs loop information back through the network, so input and output lengths can vary.
- CNN outputs depend only on the current input; RNN outputs depend on previous elements of the sequence, retained in the network's internal state.
Simple RNNs face two primary challenges related to gradients:

- Vanishing gradients: as errors are propagated back through many time steps, the gradients shrink toward zero, weight updates become negligible, and the network stops learning long-range dependencies.
- Exploding gradients: the gradients grow uncontrollably large, assigning excessive importance to the weights and making training unstable.
Solutions include reducing hidden layers or using advanced architectures like LSTM and GRU.
Simple RNNs suffer from short-term memory limitations. LSTM and GRU address this by enabling the retention of information over extended periods.
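In Keras, moving from a simple RNN to a gated architecture is largely a matter of swapping the recurrent layer. The sketch below is illustrative; the sequence length and unit counts are assumptions, not values from the project:

```python
from tensorflow.keras import layers, models

def build_model(rnn_layer):
    """Build a small sequence model around the given recurrent layer type."""
    return models.Sequential([
        layers.Input(shape=(60, 1)),   # 60 time steps, 1 feature (illustrative)
        rnn_layer(50),                 # recurrent cell with 50 units
        layers.Dense(1),               # single-value prediction
    ])

simple_rnn = build_model(layers.SimpleRNN)  # prone to vanishing/exploding gradients
lstm_model = build_model(layers.LSTM)       # gated memory keeps long-range context
gru_model = build_model(layers.GRU)         # lighter gated alternative to LSTM
```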
This section details a project using LSTM and GRU to predict MasterCard stock prices. The code utilizes libraries like Pandas, NumPy, Matplotlib, scikit-learn, and TensorFlow.
The full code is omitted here for brevity; the core steps are:

- Load the historical MasterCard stock prices with Pandas.
- Scale the prices with MinMaxScaler and reshape them for model input.
- Build and train LSTM and GRU models on the resulting sequences, then compare their predictions.
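A hedged sketch of such a pipeline is shown below. The file name, column name, window length, and training settings are assumptions for illustration and are not the original project's code:

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras import layers, models

# Load prices (hypothetical file and column names).
df = pd.read_csv("mastercard_stock.csv")
prices = df["Close"].values.reshape(-1, 1)

# Scale to [0, 1] so the recurrent layers train stably.
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(prices)

# Build (past window, next value) pairs; 60 past days is an assumed window.
window = 60
X, y = [], []
for i in range(window, len(scaled)):
    X.append(scaled[i - window:i, 0])
    y.append(scaled[i, 0])
X = np.array(X).reshape(-1, window, 1)   # samples x time steps x features
y = np.array(y)

# GRU model; swap in layers.LSTM(50) to compare the two architectures.
model = models.Sequential([
    layers.Input(shape=(window, 1)),
    layers.GRU(50),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Predict the next scaled value and map it back to the original price range.
predicted = scaler.inverse_transform(model.predict(X[-1:]))
print(predicted)
```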
Hybrid CNN-RNN networks are increasingly used for tasks requiring both spatial and temporal understanding. This tutorial provided a foundational understanding of RNNs, their limitations, and the solutions offered by advanced architectures like LSTM and GRU. The project demonstrated the application of LSTM and GRU to stock price prediction, highlighting GRU's superior performance in this specific case. The complete project is available on the DataCamp workspace.