
The Latest Introduction to 'Multimodal LLM': Datasets and Papers, Packaged and Ready to Take Away

PHPz
Release: 2023-06-09 22:58:37


Progress tracking link (Awesome-MLLM, real-time updates): https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models

In recent years, research on large language models (LLMs) has made significant progress (e.g., GPT-3, LLaMA, ChatGPT, GPT-4), and these models have demonstrated excellent performance on a wide range of natural language processing (NLP) tasks.

Through pre-training on massive amounts of data, LLMs acquire rich knowledge and strong reasoning capabilities. Given only a user instruction as input, these models can parse the instruction, reason over it, and produce answers that meet the user's expectations.

Typical capabilities of LLMs include:

· Execute new tasks not seen during training;
· Complete new tasks given only a few examples;
· Perform complex reasoning tasks via chains of reasoning;
· Coordinate various models and tools to complete complex tasks.

Behind these capabilities are several key ideas and techniques, including Instruction Tuning, In-Context Learning, and Chain of Thought.

Multimodal Large Language Models

Although large language models have made great progress in NLP, corresponding models and techniques have been explored far less in the multimodal field, and traditional vision-language models still suffer from limitations such as insufficient generalization and a lack of reasoning ability.

To this end, many scholars have recently turned their attention to an emerging direction: Multimodal Large Language Models (MLLM).

The main idea is to use the LLM as a "brain" that integrates, reasons about, analyzes, and makes decisions on the multimodal input, thereby completing the tasks assigned by humans.


From the perspective of moving toward artificial general intelligence, MLLM takes a step beyond LLM and has the following advantages:

· More in line with the way humans perceive the world. Humans have multiple senses and receive information from multiple modalities, which are often complementary and synergistic. Using multimodal information therefore generally enables better recognition and completion of complex tasks;

· A more powerful and user-friendly interface. By supporting multimodal input, users can convey information in a more flexible way;

· Broader task support. LLMs can usually only handle NLP-related tasks, whereas MLLMs can support more tasks by incorporating additional modalities.

From a system design perspective, MLLM can be divided into two categories:

· A cognitive reasoning system in which the LLM serves as the reasoner and supports multimodal input;

· A multi-tool collaboration system in which the LLM acts as the planner/scheduler/decision maker.

The former generally converts multimodal information into a form that the LLM can directly receive and process via a trainable multimodal conversion interface, enabling the LLM to perform cognition and reasoning over the multimodal information together with the user's instructions.

The latter usually uses the LLM as a planner/scheduler/decision maker [1] to decompose the complex task issued by the user into simpler sub-tasks, dispatch them to appropriate models/tools, and finally integrate the results into the output.
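To make the first category more concrete, below is a minimal sketch of what such a trainable conversion interface can look like: a learnable linear projection that maps features from a frozen vision encoder into the LLM's token-embedding space, so visual tokens and text tokens can be fed to the LLM together. All dimensions and names here are illustrative assumptions, not taken from any specific model.

```python
import torch
import torch.nn as nn

class VisualProjector(nn.Module):
    """Hypothetical trainable interface: projects frozen vision-encoder
    features into the LLM's token-embedding space."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, num_patches, vision_dim)
        return self.proj(image_features)  # (batch, num_patches, llm_dim)

# Usage sketch: prepend the projected visual tokens to the embedded
# instruction text before feeding everything to the LLM.
projector = VisualProjector()
image_features = torch.randn(1, 256, 1024)   # stand-in for vision-encoder output
visual_tokens = projector(image_features)    # (1, 256, 4096)
text_embeddings = torch.randn(1, 32, 4096)   # stand-in for embedded instruction text
llm_inputs = torch.cat([visual_tokens, text_embeddings], dim=1)
```

Only the projector needs to be trained in this sketch; whether the vision encoder and the LLM are also fine-tuned varies from model to model.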

We adopt a different perspective and focus on the key techniques and implementation approaches behind MLLMs, surveying and summarizing related work and dividing MLLMs into the following categories:

·Multimodal Instruction Tuning

·Multimodal In-Context Learning

·Multimodal Chain-of-Thought

·LLM-Aided Visual Reasoning

Below we will give a brief introduction to these types of work.

Multimodal Instruction Tuning

The basic approach of multimodal instruction tuning is to use a unified template to cast all kinds of data into the form of instructions that describe the task requirements, producing multimodal instruction data, and then to fine-tune the MLLM on this data.

Because the instruction format is consistent between training and testing, the LLM can generalize to other tasks more flexibly and, by virtue of its strong semantic understanding and reasoning capabilities, acquire powerful zero-shot learning ability.

The basic form of multimodal instruction data can be summarized as an (instruction, multimodal input, answer) triplet.

An intuitive way to obtain such data is to transform existing benchmark datasets. Taking image captioning as an example, as shown in Figure 1 below:


Figure 1. Example of multimodal instruction data

An original captioning sample consists of an image and a text description (the ground truth, GT). This image-GT pair naturally constitutes the multimodal-input and answer parts of the instruction data.

The instruction part is a description of the corresponding task, typically written manually or generated by prompting GPT.

During multimodal instruction tuning, the MLLM converts the multimodal input and feeds it to the LLM, which predicts the answer based on the multimodal information and the instruction text.
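As a concrete illustration of this transformation, here is a minimal sketch that turns an (image, caption) pair from a captioning benchmark into an (instruction, multimodal input, answer) triplet. The template list, field names, and file path are illustrative assumptions only.

```python
import random

# Hypothetical instruction templates; in practice they are written manually
# or generated with GPT, as described above.
CAPTION_TEMPLATES = [
    "Describe the image briefly.",
    "Provide a short caption for this picture.",
    "What is shown in the image?",
]

def caption_to_instruction_sample(image_path: str, ground_truth: str) -> dict:
    """Turn an (image, caption) pair into an
    (instruction, multimodal input, answer) triplet."""
    return {
        "instruction": random.choice(CAPTION_TEMPLATES),  # task description
        "input": {"image": image_path},                   # multimodal input
        "answer": ground_truth,                           # ground-truth caption
    }

# Illustrative usage with a placeholder image path and caption.
sample = caption_to_instruction_sample("path/to/image.jpg",
                                        "A dog playing in the park.")
print(sample)
```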

Multimodal In-Context Learning

The core idea of multimodal in-context learning is learning from analogy, much like the worked examples we typically encounter when studying.


By studying worked examples, when we encounter new problems we can, by analogy, apply the basic ideas and methods learned from them (for instance, for proportion problems) to solve the new problems.

In addition, the example questions also standardize the answer format, which makes it easier to obtain correct answers that meet the expected requirements.

As shown in Figure 2 below, an in-context example is used to let the model predict the result of 3×7.


Figure 2. Example of multimodal in-context data: an in-context example is used to let the model predict the result of 3×7
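A minimal sketch of how such a few-shot multimodal prompt might be assembled is shown below. The `<image:...>` placeholder convention and the specific example contents are invented for illustration; real MLLMs insert visual tokens through their own interfaces.

```python
def build_icl_prompt(examples: list[dict], query_image: str, query_question: str) -> str:
    """Assemble a few-shot multimodal prompt: each in-context example pairs an
    image placeholder with its question and answer; the query is left unanswered."""
    parts = []
    for ex in examples:
        parts.append(f"<image:{ex['image']}>\nQ: {ex['question']}\nA: {ex['answer']}")
    parts.append(f"<image:{query_image}>\nQ: {query_question}\nA:")
    return "\n\n".join(parts)

# One worked example, then the 3x7 query for the model to complete (as in Figure 2).
examples = [{"image": "example_2x3.png", "question": "What is the result?", "answer": "6"}]
print(build_icl_prompt(examples, "query_3x7.png", "What is the result?"))
```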

Multimodal Chain-of-Thought

A chain of thought is a series of intermediate reasoning steps [2]. The basic idea of the multimodal chain of thought is to have the model learn to output intermediate steps one by one and finally derive the final answer, as shown in Figure 3 below:


Figure 3. Example of multimodal chain-of-thought data

Compared with directly outputting the answer, chain-of-thought reasoning is:

· More in line with human reasoning habits: building on previous reasoning steps and results, it gradually leads to the final answer;

· Better suited to complex reasoning tasks: solving complex problems step by step improves the accuracy of the answers (a minimal prompt sketch follows this list).
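Below is a minimal sketch of what a multimodal chain-of-thought prompt and target can look like, together with a tiny helper that separates the rationale from the final answer. The image placeholder, wording, and the "Therefore, the answer is" marker are all illustrative assumptions, not the format of any specific dataset.

```python
# Prompt that asks the model to reason step by step over the image.
COT_PROMPT = (
    "<image:query_3x7.png>\n"
    "Q: What is the result of the multiplication shown in the image?\n"
    "A: Let's think step by step."
)

# A chain-of-thought target pairs intermediate steps with the final answer,
# so the model learns to emit its reasoning before concluding.
COT_TARGET = (
    "The image shows 3 rows with 7 objects in each row. "
    "3 x 7 = 21. Therefore, the answer is 21."
)

def split_rationale_and_answer(output: str) -> tuple[str, str]:
    """Separate the reasoning steps from the final answer, assuming the model
    ends its chain of thought with 'Therefore, the answer is ...'."""
    marker = "Therefore, the answer is"
    rationale, _, answer = output.partition(marker)
    return rationale.strip(), answer.strip(" .")

print(split_rationale_and_answer(COT_TARGET))  # ('The image shows ... 3 x 7 = 21.', '21')
```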

LLM-Aided Visual Reasoning

LLM-aided visual reasoning uses the LLM as the decision-making and reasoning mechanism: it calls various multimodal models and tools and integrates their outputs to obtain the final answer. Depending on how the task is completed, such systems can generally be divided into single-round and multi-round models.

The basic idea of the single-round model is that the LLM acts as planner, scheduler, and decision maker to coordinate various models/tools to complete the task. It generally needs to fulfill the following functions [1] (a minimal code sketch follows this list):

· Planner: decomposes the complex task into solvable subtasks;

· Scheduler: dispatches each subtask to an appropriate model/tool;

· Decision maker: manages the execution order of the subtasks and integrates their results into the final answer.
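The sketch below illustrates these three roles in a single round. The tool set, prompts, and the `llm` callable are hypothetical stand-ins, not the interface of any particular system.

```python
def caption_tool(image):   # stand-in for an image-captioning model
    return "a bar chart comparing revenue in 2022 and 2023"

def ocr_tool(image):       # stand-in for an OCR model
    return "2022: $1.2M   2023: $1.8M"

TOOLS = {"caption": caption_tool, "ocr": ocr_tool}

def single_round_reasoning(user_task: str, image, llm) -> str:
    """One round of LLM-aided visual reasoning: plan, dispatch, then integrate."""
    # Planner: the LLM decomposes the task and names the tools to call, one per line.
    plan = llm(
        f"Task: {user_task}\n"
        f"Available tools: {list(TOOLS)}\n"
        "List the tools needed to solve the task, one name per line."
    )
    # Scheduler: dispatch each planned subtask to the corresponding tool.
    tool_outputs = {
        name.strip(): TOOLS[name.strip()](image)
        for name in plan.splitlines()
        if name.strip() in TOOLS
    }
    # Decision maker: the LLM integrates the tool outputs into the final answer.
    return llm(
        f"Task: {user_task}\nTool outputs: {tool_outputs}\nGive the final answer."
    )
```

Here `llm` is any callable that maps a prompt string to a response string (for example, a wrapper around an LLM API); parsing of the plan is deliberately simplified.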

The multi-round model is based on iteration: it continuously accumulates visual information until it is confident enough to produce the final answer. In this process, the LLM needs to integrate the previous steps (the sub-questions raised and the visual information obtained) to decide whether the final answer can be output [3].
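A minimal sketch of this iterative loop is shown below; `llm` and `vqa_model` are hypothetical callables, and the "FINAL:/ASK:" reply convention is an invented protocol for illustration.

```python
def multi_round_reasoning(question: str, image, llm, vqa_model, max_rounds: int = 5) -> str:
    """Iteratively gather visual information until the LLM is confident enough to answer."""
    history = []  # accumulated (sub-question, visual answer) pairs
    for _ in range(max_rounds):
        # The LLM reviews what has been gathered so far and either answers
        # or asks the visual model another sub-question.
        decision = llm(
            f"Question: {question}\n"
            f"Gathered so far: {history}\n"
            "Reply 'FINAL: <answer>' if you are confident, otherwise 'ASK: <sub-question>'."
        )
        if decision.startswith("FINAL:"):
            return decision[len("FINAL:"):].strip()
        sub_question = decision[len("ASK:"):].strip()
        history.append((sub_question, vqa_model(image, sub_question)))
    # Round budget exhausted: ask for a best-effort answer based on what was gathered.
    return llm(f"Question: {question}\nGathered: {history}\nGive your best answer.")
```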

For related papers, please see: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models


Source: 51cto.com