
Academician E Weinan leads the new work: Large models not only have RAG and parameter storage, but also a third kind of memory

Jul 16, 2024 am 11:57 AM
project · E Weinan · Memory3

The 2.4B Memory3 model achieves better performance than larger LLMs and RAG models.

In recent years, large language models (LLMs) have received unprecedented attention due to their extraordinary performance. However, LLMs are expensive to train and to run at inference time, and people have been trying to reduce these costs through various optimization methods.

In this article, researchers from the Shanghai Algorithm Innovation Research Institute, Peking University, and other institutions draw inspiration from the memory hierarchy of the human brain and equip the LLM with explicit memory, a memory format that is cheaper to maintain than model parameters and RAG, in order to reduce this cost. Conceptually, an LLM whose knowledge is mostly externalized into explicit memory can enjoy a smaller parameter size, lower training cost, and lower inference cost.

Paper address: https://arxiv.org/pdf/2407.01178
Paper title: Memory3: Language Modeling with Explicit Memory

As a preliminary proof of concept, the researchers trained a 2.4B LLM from scratch that achieves better performance than larger LLMs and RAG models, as well as higher decoding speed than RAG. The model is named Memory3 because, in an LLM, explicit memory is the third form of memory after implicit memory (model parameters) and working memory (context key-values).

Specifically, this paper introduces a new memory format, explicit memory, characterized by relatively low write cost and relatively low read cost. As shown in Figure 1, the model first converts a knowledge base (or any text dataset) into explicit memories implemented as sparse attention key-values, then retrieves these memories during inference and integrates them into its self-attention layers.
The new memory format defines a new memory hierarchy.
In addition, the paper introduces a memory circuit theory that supports the externalization of knowledge, and proposes a memory sparsification mechanism that keeps storage tractable as well as a two-stage pretraining scheme to promote memory formation.
In summary:

  • Memory3 utilizes explicit memory during inference, which relieves the model parameters of the burden of memorizing specific knowledge;
  • Explicit memory is encoded from the knowledge base, and the sparse memory format keeps the actual storage size manageable;
  • The researchers trained a Memory3 model from scratch with 2.4B non-embedding parameters; its performance exceeds that of larger SOTA models, and it also achieves better performance and faster inference than RAG;
  • Additionally, Memory3 improves factuality, mitigates hallucination, and enables rapid adaptation to professional tasks.

Method introduction

Memory circuit theory helps determine what knowledge can be stored as explicit memory, and which model architecture is suitable for reading and writing explicit memory.

The researchers view circuits as the internal mechanisms that realize input-output relationships, and define a piece of knowledge as an input-output relationship together with its circuit. By manipulating these circuits, much of the knowledge can be separated out of the LLM while keeping its functionality intact.
Memory3: In terms of architecture, the goal of this paper is to design an explicit memory mechanism for Transformer LLMs whose write cost and read cost are both relatively low. In addition, the paper aims to keep modifications to the Transformer architecture as limited as possible, without adding any new trainable parameters, so that most existing Transformer LLMs can be converted into Memory3 models with little fine-tuning. The simple design process is as follows:
Write cost: Before inference, the LLM writes each reference as an explicit memory, which is saved to disk. Memories are selected from the key-values of the self-attention layers, so the writing process involves no training. Each reference is processed independently, avoiding the cost of long-context attention.
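Concretely, this write step amounts to a single forward pass over each reference, keeping only a sparse subset of key-values from the chosen memory heads. Below is a minimal sketch of that idea; the model interface (`output_key_values`, `output_attentions`), the attention-based token-selection rule, and the `tokens_per_head` budget are illustrative assumptions rather than the paper's exact procedure.

```python
import torch

def write_explicit_memory(model, reference_ids, memory_heads, tokens_per_head=8):
    """Sketch: encode one reference and keep a sparse subset of its self-attention
    key/value vectors as an explicit memory (names and interface are hypothetical)."""
    with torch.no_grad():
        # Hypothetical forward pass that also returns per-layer K/V tensors of
        # shape (num_heads, seq_len, head_dim) and attention weights.
        out = model(reference_ids, output_key_values=True, output_attentions=True)

    memory = {}
    for layer, heads in memory_heads.items():          # memory_heads: {layer: [head, ...]}
        keys, values = out.key_values[layer]           # each (H, T, d)
        attn = out.attentions[layer]                   # (H, T, T)
        for h in heads:
            # One plausible saliency rule: keep the tokens that receive the most
            # attention in this head (the actual selection criterion may differ).
            scores = attn[h].sum(dim=0)                # (T,)
            topk = torch.topk(scores, k=min(tokens_per_head, scores.numel())).indices
            memory[(layer, h)] = (keys[h, topk].half(), values[h, topk].half())
    return memory  # e.g. torch.save(memory, path) to keep it on disk
```

No training is involved: the memory is just a pruned snapshot of the reference's attention key-values.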

Read cost: During inference, explicit memories are retrieved from disk and read by self-attention alongside the usual context key-values. Each memory consists of a very small number of key-values from a small number of attention heads, which greatly reduces the extra computation, GPU memory, disk storage, and load time. This allows the LLM to retrieve many references frequently with limited impact on decoding speed.

The inference process is shown in Figure 9. Whenever the LLM has generated 64 tokens, it discards the current memories, uses those 64 tokens as the query text to retrieve 5 new memories, and continues decoding with them. Likewise, when processing the prompt, the LLM retrieves 5 memories for each chunk of 64 tokens. Each chunk attends to its own memories, and the memories may differ across chunks.
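This chunked retrieval loop can be sketched as follows; `retriever.retrieve` and `model.decode_step` are placeholder names used for illustration, not the authors' actual interfaces.

```python
def generate_with_memory(model, retriever, prompt_ids, max_new_tokens,
                         chunk_size=64, memories_per_chunk=5):
    """Sketch of decoding with periodic memory refresh (interfaces are illustrative)."""
    generated = list(prompt_ids)
    # Initial retrieval, using the tail of the prompt as the query.
    memories = retriever.retrieve(query_ids=generated[-chunk_size:], k=memories_per_chunk)
    tokens_since_refresh = 0

    for _ in range(max_new_tokens):
        next_token = model.decode_step(generated, explicit_memories=memories)
        generated.append(next_token)
        tokens_since_refresh += 1

        if tokens_since_refresh == chunk_size:
            # Every 64 generated tokens: drop the current memories and retrieve
            # 5 fresh ones, with the newest 64-token chunk as the query.
            memories = retriever.retrieve(query_ids=generated[-chunk_size:],
                                          k=memories_per_chunk)
            tokens_since_refresh = 0
    return generated
```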
Writing and reading memories: During inference, the LLM reads the retrieved explicit memories directly through its self-attention layers by concatenating them with the contextual key-values (Figure 9). Specifically, for each attention head h of the l-th layer that is selected as a memory head, its output Y^(l,h) changes from ordinary attention over the context key-values to attention over the concatenation of the retrieved memory key-values and the context key-values.
In addition, explicit memories use parallel position encoding, i.e. all of their key positions fall within the same interval of length 128, as shown in Figure 9.
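Putting the last two points together, a memory head concatenates the retrieved memory key-values in front of the ordinary context key-values and attends over both, with every memory's key positions squeezed into one length-128 interval. The sketch below assumes a rotary-style position-encoding helper `rotary(x, pos)` and omits batching, multi-head reshaping, and the causal mask for brevity; it is an illustration of the mechanism, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def memory_head_attention(q, ctx_k, ctx_v, mem_k_list, mem_v_list, rotary):
    """One memory head reading explicit memories. Shapes: q (T, d); ctx_k/ctx_v (T_ctx, d);
    each element of mem_k_list/mem_v_list (m_i, d). `rotary` is a hypothetical helper."""
    # Parallel position encoding: every retrieved memory is placed in the same
    # length-128 interval, so memories do not consume fresh positions.
    mem_k = torch.cat([rotary(k, pos=torch.arange(k.size(0)) % 128) for k in mem_k_list], dim=0)
    mem_v = torch.cat(mem_v_list, dim=0)

    # Context keys and queries keep their usual (later) positions.
    ctx_pos = 128 + torch.arange(ctx_k.size(0))
    ctx_k = rotary(ctx_k, pos=ctx_pos)
    q = rotary(q, pos=ctx_pos[-q.size(0):])

    # Memory key-values are concatenated in front of the context key-values and
    # the head attends over both with ordinary softmax attention.
    k = torch.cat([mem_k, ctx_k], dim=0)
    v = torch.cat([mem_v, ctx_v], dim=0)
    scores = q @ k.t() / (q.size(-1) ** 0.5)
    return F.softmax(scores, dim=-1) @ v
```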

Two-stage pretraining: Pretraining consists of two stages, warmup and continual training. Only the continual-training stage involves explicit memory; the warmup stage uses the same format as ordinary pretraining.
Figure 13 plots the training loss and learning rate schedule during the warmup phase.
Figure 14 plots the training loss and learning rate schedule during the continual training phase.
Experimental results

The researchers evaluated the Memory3 model's general abilities (benchmark tasks), conversational ability, professional abilities (law and medicine), and susceptibility to hallucination. In addition, they measured the decoding speed of Memory3 and compared it with similar-sized and larger SOTA LLMs and RAG models.

The evaluation results for general ability are shown below: explicit memory raises the average score by 2.51%. For comparison, the score difference between Llama2-7B and Llama2-13B is 4.91%, so explicit memory can be said to increase the "effective model size" by 2.51/4.91 ≈ 51.1%.
Next, the authors evaluated the dialogue skills of Memory3, and the results are listed in Table 18, showing that the model outperforms Vicuna-7B, Falcon-40B-Instruct and ChatGLM2-6B with fewer parameters.
Currently, LLMs still face hallucination issues. Conceptually, Memory3 should be less susceptible to hallucination because its explicit memories correspond directly to reference texts. To assess this, the researchers evaluated on two English datasets. The results in Table 19 show that Memory3 achieves the highest score on most tasks.
One benefit of using explicit memory is that the LLM can easily adapt to new domains and tasks by updating its knowledge base: simply import task-related references into Memory3's knowledge base and, optionally, convert them into explicit memories as a warm start. The model can then use this new knowledge for inference, skipping the more costly and potentially lossy fine-tuning process, and it runs faster than RAG. This cost reduction is demonstrated in Figure 4 and could facilitate the rapid deployment of LLMs across industries.
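In practice this adaptation workflow is little more than updating the retrieval index and optionally pre-encoding the new references; a schematic sketch, with illustrative method names (`add_reference`, `write_explicit_memory`), might look like this:

```python
def adapt_to_domain(memory3, retriever, domain_documents, precompute=True):
    """Sketch: adapt to a new domain by extending the knowledge base, not by fine-tuning.
    All object and method names here are illustrative placeholders."""
    for doc in domain_documents:
        ref_id = retriever.add_reference(doc)          # make the reference retrievable
        if precompute:
            # Optional warm start: convert the reference into an explicit memory now,
            # so it need not be encoded on the fly at inference time.
            memory3.write_explicit_memory(ref_id, doc)
    # No gradient updates are involved; the model parameters stay untouched.
```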
The table below shows that Memory3 performs better than most models.
Finally, the researchers evaluated the decoding speed, i.e. throughput, of Memory3, measured as the number of tokens generated per second.
For more information, please refer to the original paper.
