Table of Contents
Neural Network
ChatGPT model size
Encoders, decoders and RNN
Transformer vs. Attention
Generative pre-training
Bringing it all together

AI Encyclopedia: How ChatGPT works

Apr 12, 2023, 01:31 PM

ChatGPT quickly gained the attention of millions of people, but many were wary because they didn't understand how it worked. This article is an attempt to break it down so it's easier to understand.

At its core, ChatGPT is a very complex system. If you want to play with ChatGPT or figure out what it is, the core interface is a chat window where you can ask questions or provide queries and the AI will respond. An important detail to remember is that context is preserved within a chat: messages can reference previous information, and ChatGPT will understand this contextually.

What happens when a query is entered in the chat box?

Neural Network

There is a lot going on under the hood of ChatGPT. Machine learning has been developing rapidly over the past 10 years, and ChatGPT utilizes many state-of-the-art techniques to achieve its results.


Neural networks are layers of interconnected "neurons": each neuron is responsible for receiving input, processing it, and passing the result to the next neuron in the network. Neural networks form the backbone of today's artificial intelligence. The input is usually a set of numerical values called "features" that represent some aspect of the data being processed. In the case of language processing, for example, the features might be word embeddings that represent the meaning of each word in a sentence.

Word embeddings are simply numerical representations of text that a neural network uses to capture the semantics of the text, which can then be put to other uses, such as responding in a semantically coherent way!

So after pressing enter in ChatGPT, the text is first converted into word embeddings, which were trained on text from all over the internet. A neural network is then used that has been trained to output a set of appropriate response word embeddings given the input word embeddings. Those embeddings are then translated back into human-readable words by applying the inverse of the operation used on the input query. This decoded output is what ChatGPT prints.
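To make the embedding round trip concrete, here is a minimal sketch with a made-up three-dimensional embedding table (real models learn embeddings with thousands of dimensions from internet-scale text); decoding uses a nearest-neighbour lookup by cosine similarity, a common stand-in for the "inverse operation" described above:

```python
import math

# Toy embedding table: each word maps to a small vector.
# These numbers are invented for illustration only.
EMBEDDINGS = {
    "cat":   [0.9, 0.1, 0.0],
    "dog":   [0.8, 0.2, 0.1],
    "paris": [0.0, 0.9, 0.8],
}

def embed(word):
    """Encode a word as its embedding vector."""
    return EMBEDDINGS[word]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def decode(vector):
    """'Inverse operation': map a vector back to the closest known word."""
    return max(EMBEDDINGS, key=lambda w: cosine(EMBEDDINGS[w], vector))

print(decode(embed("cat")))          # round-trips back to "cat"
print(decode([0.78, 0.22, 0.12]))    # nearest neighbour in the toy table
```

A model's output embeddings rarely match a stored vector exactly, which is why decoding is a nearest-neighbour (or, in real models, a learned softmax) step rather than a simple table lookup.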

ChatGPT model size

The computational cost of this conversion and of generating output is very high. ChatGPT sits on top of GPT-3, a large language model with 175 billion parameters. This means there are 175 billion weights in the underlying neural network that OpenAI tuned using its large training dataset.

So each query requires on the order of 2 × 175 billion calculations, which adds up quickly. OpenAI may have found ways to cache some of these calculations to reduce computational costs, but it's unknown whether any such details have been published. Additionally, GPT-4, expected to be released early this year, is said to have 1,000 times more parameters!
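As a rough sketch of why this adds up, a common back-of-envelope heuristic (an approximation, not an official OpenAI figure) is about two floating-point operations per parameter for each generated token, one multiply and one add per weight:

```python
# Back-of-envelope inference cost, assuming ~2 floating-point operations
# per parameter per generated token. This is a rough heuristic only.
PARAMS = 175e9               # GPT-3 parameter count
FLOPS_PER_TOKEN = 2 * PARAMS

def query_flops(num_output_tokens):
    """Approximate FLOPs to generate a response of the given length."""
    return FLOPS_PER_TOKEN * num_output_tokens

# A 100-token answer costs roughly 3.5e13 floating-point operations.
print(f"{query_flops(100):.2e}")
```

Multiply that by millions of queries per day and the hardware bill becomes clear.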

There are real costs in terms of computational complexity! Don't be surprised if ChatGPT becomes a paid product soon, as OpenAI is currently spending millions of dollars to run it for free.

Encoders, decoders and RNN

A commonly used neural network structure in natural language processing is the encoder-decoder network. These networks are designed to "encode" an input sequence into a compact representation and then "decode" that representation into an output sequence.

Traditionally, encoder-decoder networks have been paired with recurrent neural networks (RNN) for processing sequential data. The encoder processes the input sequence and produces a fixed-length vector representation, which is then passed to the decoder. The decoder processes this vector and produces an output sequence.

Encoder-decoder networks have been widely used in tasks such as machine translation, where the input is a sentence in one language and the output is the translation of that sentence into another language. They are also applied to summarization and image caption generation tasks.
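The encoder-decoder flow described above can be sketched in a few lines. This is a conceptual toy, not a trained model: the "RNN" steps below are hypothetical placeholders for learned recurrent cells, and only illustrate how a variable-length input is squeezed into a fixed-length vector and then unrolled into an output sequence:

```python
# Conceptual encoder-decoder sketch. A real RNN cell would apply learned
# weights and a nonlinearity at each step; these stand-ins just show how
# state threads through time and through the fixed-length bottleneck.

def encoder_step(state, token_vec):
    """One 'RNN' step: fold the next token vector into the running state."""
    return [s + t for s, t in zip(state, token_vec)]

def encode(token_vecs, dim=3):
    """Compress a variable-length sequence into one fixed-length vector."""
    state = [0.0] * dim
    for vec in token_vecs:
        state = encoder_step(state, vec)
    return state  # same size regardless of how long the input was

def decode(state, num_steps):
    """Unroll the context vector into an output sequence of num_steps."""
    outputs = []
    for _ in range(num_steps):
        outputs.append(list(state))   # a real decoder would emit a token here
        state = [s * 0.5 for s in state]  # placeholder state update
    return outputs

sentence = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # two "word" vectors
context = encode(sentence)
print(context)                 # one fixed-length vector
print(len(decode(context, 4))) # 4 decoder steps
```

The fixed-length `context` vector is both the strength and the weakness of this design: it makes the interface between encoder and decoder simple, but it becomes a bottleneck for long inputs, which is part of what attention was invented to fix.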


Transformer vs. Attention

Similar to the encoder-decoder structure, the transformer consists of two components; however, the transformer differs in that it uses a self-attention mechanism that allows each element of the input to attend to every other element, letting it capture relationships between elements regardless of their distance from each other.

The transformer also uses multi-head attention, allowing it to attend to multiple parts of the input simultaneously. This enables it to capture complex relationships in the input text and produce highly accurate results.

When the "Attention is All You Need" paper was published in 2017, the transformer replaced the encoder-decoder architecture as the state-of-the-art model for natural language processing because it achieved better performance on longer texts.
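The core of the mechanism, scaled dot-product self-attention, can be sketched as follows. For clarity this omits the learned query/key/value projection matrices (so Q = K = V = X) and the multiple heads, leaving just the bare mechanism by which every position attends to every other:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention over a list of vectors X.
    Simplified: learned Q/K/V projections are omitted (Q = K = V = X)."""
    d = len(X[0])
    out = []
    for q in X:
        # Each position scores its similarity to every position, near or far.
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in X]
        weights = softmax(scores)
        # Output is the attention-weighted mix of all value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, X))
                    for i in range(d)])
    return out

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three toy "token" vectors
result = self_attention(X)
print(len(result), len(result[0]))  # 3 positions, 2 dims each
```

Because every position is compared with every other in one step, distance in the sequence no longer matters, which is exactly the property that RNN-based encoder-decoders lacked.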


Transformer architecture, from https://arxiv.org/pdf/1706.03762.pdf

Generative pre-training

Generative pre-training is a technique that has been particularly successful in the field of natural language processing. It involves training extensive neural networks on massive data sets in an unsupervised manner to learn a universal representation of the data. This pre-trained network can be fine-tuned for specific tasks, such as language translation or question answering, thereby improving performance.


Generative pre-training architecture, excerpted from "Improving Language Understanding by Generative Pre-Training"

In the case of ChatGPT, this meant fine-tuning the last layer of the GPT-3 model to fit the use case of answering questions in chat, which also leveraged human labeling. The following figure gives a more detailed view of ChatGPT's fine-tuning:
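The "fine-tune only the last layer" idea can be sketched as below. This is a hypothetical toy showing only the freezing concept (the actual ChatGPT fine-tuning pipeline, per the InstructGPT paper, also involves reward modeling and reinforcement learning from human feedback):

```python
# Hypothetical sketch of last-layer fine-tuning: all earlier weights are
# frozen, and a gradient-descent update is applied to the final layer only.

def sgd_finetune_last_layer(layers, grads, lr=0.01):
    """layers: list of weight lists, one per layer; only the last is updated."""
    *frozen, last = layers
    last_grads = grads[-1]
    # Standard SGD step on the last layer: w <- w - lr * gradient.
    updated_last = [w - lr * g for w, g in zip(last, last_grads)]
    return frozen + [updated_last]

layers = [[0.5, 0.5], [0.2, 0.8]]   # toy: two "layers" of weights
grads  = [[9.0, 9.0], [1.0, -1.0]]  # first layer's gradients are ignored
new = sgd_finetune_last_layer(layers, grads, lr=0.1)
print(new)  # first layer unchanged; last layer moved against its gradient
```

Freezing the bulk of the network is what makes fine-tuning so much cheaper than pre-training: only a small fraction of the 175 billion weights need gradient updates.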


ChatGPT fine-tuning steps, from https://arxiv.org/pdf/2203.02155.pdf

Bringing it all together

So there are many moving parts under the hood of ChatGPT, and their number will only continue to grow. It will be very interesting to see how it develops, as advancements in many different areas will help GPT-like models gain further adoption.

Over the next year or two, we may see significant disruption from this new enabling technology.
