What is the Transformer machine learning model?
Translator | Li Rui
Reviewer | Sun Shujuan
In recent years, the Transformer machine learning model has become one of the main highlights of advances in deep learning and deep neural network technology. It is mainly used for advanced applications in natural language processing. Google uses it to enhance its search engine results, and OpenAI used Transformers to create its famous GPT-2 and GPT-3 models.
Since its debut in 2017, the Transformer architecture has continued to evolve and expand into many different variants, extending from language tasks to other domains. They have been used for time series forecasting. They are the key innovation behind AlphaFold, DeepMind’s protein structure prediction model. OpenAI’s source code generation model Codex is also based on Transformer. Transformers have also recently entered the field of computer vision, where they are slowly replacing convolutional neural networks (CNN) in many complex tasks.
Researchers are still exploring ways to improve Transformer and use it in new applications. Here’s a quick explanation of what makes Transformers exciting and how they work.
1. Using neural networks to process sequential data
Traditional feedforward neural networks are not designed to track sequential data; they map each input to an output independently. This works well for tasks like image classification but fails on sequential data like text. A machine learning model that processes text must not only handle each word but also consider how the words are ordered and related to each other. The meaning of a word may change depending on the words that appear before and after it in the sentence.
Before the advent of the Transformer, recurrent neural networks (RNNs) were the preferred solution for natural language processing. Given a sequence of words, an RNN processes the first word and feeds the result back into the layer that processes the next word. This enables it to track an entire sentence rather than processing each word individually.
The shortcomings of RNNs limit their usefulness. First, they are slow: because they must process data sequentially, they cannot take advantage of parallel computing hardware such as graphics processing units (GPUs) for training and inference. Second, they cannot handle long sequences of text. As an RNN moves deeper into a text excerpt, the effect of the first words of the sentence gradually fades. This problem, known as the "vanishing gradient," occurs when two linked words are far apart in the text. Third, they only capture the relationship between a word and the words that precede it, while in fact the meaning of a word depends on the words that come both before and after it.
Long Short-Term Memory (LSTM) networks, the successors of RNNs, solve the vanishing gradient problem to a certain extent and can handle longer text sequences. But LSTMs are even slower to train than RNNs and still cannot take full advantage of parallel computing, because they still rely on processing text sequences serially.
A paper published in 2017, "Attention Is All You Need," introduced the Transformer and made two key contributions. First, it made it possible to process entire sequences in parallel, scaling the speed and capacity of sequential deep learning models to unprecedented levels. Second, it introduced "attention mechanisms" that can track the relationships between words across very long text sequences, both forward and backward.
Before discussing how the Transformer model works, it is necessary to discuss the types of problems that sequence neural networks solve.
- Vector-to-sequence models take a single input (such as an image) and generate a sequence of data (such as a description).
- Sequence-to-vector models take sequence data as input, such as product reviews or social media posts, and output a single value, such as a sentiment score.
- A "sequence-to-sequence" model takes as input a sequence, such as an English sentence, and outputs another sequence, such as the French translation of that sentence.
Despite their differences, all of these model types have one thing in common: they learn representations. The job of a neural network is to convert one type of data into another. During training, the neural network's hidden layers (the layers between the input and output) adjust their parameters in the way that best represents the characteristics of the input data type and maps them to the output. The original Transformer was designed as a sequence-to-sequence (seq2seq) model for machine translation (of course, sequence-to-sequence models are not limited to translation tasks). It consists of an encoder module that compresses the input string from the source language into a vector representing the words and their relationships to each other, and a decoder module that converts the encoded vector into a text string in the target language.
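To make the seq2seq structure concrete, here is a minimal sketch using PyTorch's built-in nn.Transformer module. This is an illustration under assumptions, not the original paper's implementation: the random tensors stand in for already-embedded source and target tokens, and a real translation model would add token embeddings, positional encodings, and a projection onto the target vocabulary.

```python
import torch
import torch.nn as nn

# Minimal encoder-decoder sketch. Shapes follow PyTorch's default
# (sequence length, batch size, embedding dimension) convention.
model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=6, num_decoder_layers=6)

src = torch.rand(10, 32, 512)  # stand-in for embedded source-language tokens
tgt = torch.rand(20, 32, 512)  # stand-in for embedded target-language tokens

out = model(src, tgt)          # encoder compresses src; decoder attends to it
print(out.shape)               # torch.Size([20, 32, 512])
```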
2. Tokenization and embedding
The input text must be processed and converted into a uniform format before it can be fed into the Transformer. First, the text is passed through a "tokenizer," which breaks it into chunks of characters that can be processed individually. The tokenization algorithm depends on the application. In most cases, each word and punctuation mark roughly counts as one token, and some suffixes and prefixes count as separate tokens (for example, "ize," "ly," and "pre"). The tokenizer produces a list of numbers representing the token IDs of the input text.
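As an illustration only, the toy tokenizer below splits text on words and punctuation and maps each token to an ID from a hypothetical hand-written vocabulary; production tokenizers instead learn subword units (e.g., BPE or WordPiece) from large corpora.

```python
import re

def simple_tokenize(text):
    # Split into words and individual punctuation marks
    return re.findall(r"\w+|[^\w\s]", text.lower())

# Hypothetical toy vocabulary; real vocabularies contain tens of thousands of entries
vocab = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5, ".": 6}

def encode(text):
    # Unknown tokens fall back to the <unk> ID
    return [vocab.get(token, vocab["<unk>"]) for token in simple_tokenize(text)]

print(simple_tokenize("The cat sat on the mat."))  # ['the', 'cat', 'sat', 'on', 'the', 'mat', '.']
print(encode("The cat sat on the mat."))           # [1, 2, 3, 4, 1, 5, 6]
```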
The tokens are then converted into "word embeddings." A word embedding is a vector that attempts to capture the meaning of a word in a multidimensional space. For example, the words "cat" and "dog" may have similar values along some dimensions because they are both used in sentences about animals and pets. However, along other dimensions that distinguish felines from canines, "cat" is closer to "lion" than to "wolf." Similarly, "Paris" and "London" are probably close to each other because they are both cities, but "London" is closer to "England" and "Paris" is closer to "France" along the dimensions that differentiate countries. Word embeddings typically have hundreds of dimensions.
Word embeddings are created through embedding models that are trained separately from the Transformer. There are several pre-trained embedding models for language tasks.
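The sketch below illustrates the idea with made-up, low-dimensional vectors (real embeddings have hundreds of dimensions and are learned rather than hand-written): words used in similar contexts end up with a high cosine similarity.

```python
import numpy as np

# Hypothetical 4-dimensional embeddings, hand-written purely for illustration
embeddings = {
    "cat":   np.array([0.9, 0.8, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.9, 0.1, 0.1]),
    "paris": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(a, b):
    # 1.0 means identical direction, values near 0 mean unrelated
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))    # high: related concepts
print(cosine_similarity(embeddings["cat"], embeddings["paris"]))  # low: unrelated concepts
```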
3. The attention layer
The word embeddings are fed into the Transformer's encoder module, whose attention layers capture the relationships between the words in the input sequence. The encoder contains several attention blocks and feed-forward layers to gradually capture more complex relationships.
The decoder uses the same tokenization, word embedding, and attention mechanism to process the expected result and create attention vectors. It then combines these attention vectors with those produced by the encoder module to establish the relationships between the input and output values. In a translation application, this is the part where the words of the source and target languages are mapped to each other. Like the encoder module's, the decoder's attention vectors are passed through feed-forward layers. The result is then mapped to a very large vector, the size of the target data (in the case of translation, this can involve tens of thousands of words).
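At the heart of these attention layers is scaled dot-product attention, as described in "Attention Is All You Need." Below is a minimal NumPy sketch; in a real Transformer the queries, keys, and values are produced by learned linear projections of the embeddings, and multiple attention heads run in parallel.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of each query to every key
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of value vectors

# Toy self-attention: 4 tokens with 8-dimensional embeddings (Q = K = V = X)
X = np.random.rand(4, 8)
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (4, 8): one context-aware vector per token
```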
4. Training the Transformer
During training, the Transformer is provided with a very large corpus of paired examples (e.g., English sentences and their corresponding French translations). The encoder module receives and processes the complete input string. The decoder, however, receives a masked version of the output string, one word at a time, and attempts to establish a mapping between the encoded attention vectors and the expected result. The decoder tries to predict the next word and makes corrections based on the difference between its output and the expected result. This feedback enables the Transformer to modify the parameters of the encoder and decoder and gradually create the correct mapping between the input and output languages. The more training data and parameters a Transformer has, the better it is at maintaining coherence and consistency across long sequences of text.

5. Variants of the Transformer

In the machine translation example discussed above, the Transformer's encoder module learns the relationships between English words and sentences, while the decoder learns the mapping between English and French. But not all Transformer applications require both encoder and decoder modules. For example, the GPT family of large language models uses a stack of decoder modules to generate text. BERT, another variant of the Transformer developed by Google researchers, uses only encoder modules.

The advantage of some of these architectures is that they can be trained through self-supervised or unsupervised methods. BERT, for example, does most of its training on a large corpus of unlabeled text by masking out parts of it and trying to predict the missing parts. It then adjusts its parameters based on how close its predictions are to the actual data. By continuously repeating this process, BERT captures the relationships between different words in different contexts. After this pre-training phase, BERT can be fine-tuned for downstream tasks such as question answering, text summarization, or sentiment analysis by training on a small number of labeled examples. Using unsupervised and self-supervised pre-training reduces the effort required to annotate training data.

There is a lot more to say about Transformers and the new applications they are unlocking, which is beyond the scope of this article. Researchers are still looking for ways to get more out of Transformers, and Transformers have sparked discussions about language understanding and artificial general intelligence. What is clear is that the Transformer, like other neural networks, is a statistical model that captures regularities in data in clever and sophisticated ways. They don't "understand" language the way humans do, but their development is still exciting and has much more to offer.

Original link: https://bdtechtalks.com/2022/05/02/what-is-the-transformer/
