Table of Contents
Evaluation scheme
Conclusion
Lynx model
Model effect
Cases display
Summary

The ByteDance team proposes the Lynx model: SoTA on multi-modal LLM understanding and generation leaderboards

Jul 17, 2023 pm 09:57 PM
Model Open source

Current large language models (LLMs) such as GPT4 exhibit excellent multi-modal capabilities in following open instructions given an image. However, the performance of these models depends heavily on the choice of network structure, training data, and training strategy, and these choices have not been widely discussed in the previous literature. In addition, there is currently a lack of suitable benchmarks to evaluate and compare these models, which limits the development of multi-modal LLMs.


  • Paper: https://arxiv.org/abs/2307.02469
  • Website: https://lynx-llm.github.io/
  • Code: https://github.com/bytedance/lynx-llm

In this article, the authors conduct a systematic and comprehensive study of the training of such models, from both quantitative and qualitative perspectives, setting up more than 20 variants. For the network structure, different LLM backbones and model designs were compared; for the training data, the impact of data selection and sampling strategies was studied; for instructions, the effect of diverse prompts on the model's instruction-following ability was explored. For benchmarks, the article first proposes Open-VQA, an open visual question answering evaluation set covering both image and video tasks.

Based on the experimental conclusions, the authors propose Lynx, which shows the most accurate multi-modal understanding among existing open-source GPT4-style models while maintaining the best multi-modal generation capability.

Evaluation scheme

Unlike typical vision-language tasks, the main challenge in evaluating GPT4-style models lies in balancing two aspects of performance: text generation ability and multi-modal understanding accuracy. To address this, the authors propose a new benchmark, Open-VQA, covering both video and image data, and conduct a comprehensive evaluation of current open-source models.

Specifically, two quantitative evaluation schemes are adopted:

  • An open visual question answering (Open-VQA) test set was collected, containing different categories of questions about objects, OCR, counting, reasoning, action recognition, temporal ordering, etc. Unlike standard VQA datasets, which have fixed ground-truth answers, Open-VQA's answers are open-ended. To evaluate performance on Open-VQA, GPT4 is used as the discriminator, and its judgments are 95% consistent with human evaluation.
  • In addition, the authors used the OwlEval dataset provided by mPLUG-owl [1] to evaluate the text generation ability of the models. Although it contains only 50 images and 82 questions, it covers a variety of problems such as story generation, advertisement generation, and code generation, and human annotators were recruited to score the performance of the different models.
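The GPT4-as-discriminator protocol can be sketched as follows. Note that the judge prompt, the `correct`/`incorrect` reply format, and the function names below are illustrative assumptions, not the exact prompt or code from the paper:

```python
# Sketch of GPT4-as-judge scoring for open-ended VQA answers.
# The prompt wording and parsing convention are our own assumptions.

def build_judge_prompt(question: str, reference: str, candidate: str) -> str:
    """Format a judging request asking the judge model whether the
    candidate answer matches the reference answer for the question."""
    return (
        "You are evaluating an open-ended visual question answering result.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Model answer: {candidate}\n"
        "Does the model answer convey the same meaning as the reference? "
        "Reply with exactly 'correct' or 'incorrect'."
    )

def parse_verdict(judge_reply: str) -> bool:
    """Map the judge's free-form reply to a boolean correctness label."""
    return judge_reply.strip().lower().startswith("correct")

def accuracy(verdicts: list[bool]) -> float:
    """Aggregate per-question verdicts into a single accuracy score."""
    return sum(verdicts) / len(verdicts) if verdicts else 0.0
```

The reported 95% agreement with human raters is measured over such per-question verdicts; any real implementation would send the prompt to the GPT4 API and feed the reply into `parse_verdict`.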

Conclusion

To study the training strategies of multi-modal LLMs in depth, the authors set up more than twenty variants, mainly along the following axes: network structure (prefix fine-tuning / cross-attention), training data (data selection and mixing ratio), instructions (single instruction / diversified instructions), LLM backbone (LLaMA [5] / Vicuna [6]), and image resolution (420 / 224). The experiments led to the following main conclusions:

  • Multi-modal LLMs are less capable of following instructions than LLMs. For example, InstructBLIP [2] tends to generate short replies regardless of the input instruction, while other models tend to generate long sentences regardless of the instruction; the authors attribute this to a lack of high-quality and diverse multi-modal instruction data.
  • The quality of training data is crucial to model performance. Experiments on different data showed that using a small amount of high-quality data performs better than using large-scale noisy data. The authors attribute this to the difference between generative and contrastive training: generative training directly learns the conditional distribution of words rather than the similarity between text and images. For better model performance, the data therefore needs to satisfy two conditions: 1) it contains high-quality, fluent text; 2) the text and image content are well aligned.
  • Tasks and prompts are critical to zero-shot capabilities. Using diverse tasks and instructions improves the model's zero-shot generation ability on unseen tasks, which is consistent with observations on text-only models.
  • It is important to balance correctness with language generation ability. If the model is undertrained on downstream tasks (such as VQA), it is more likely to generate fabricated content that does not match the visual input; if it is overtrained on downstream tasks, it tends to produce short answers and fails to generate longer responses as directed by the user.
  • Prefix-finetuning (PT) is currently the best solution for multi-modal adaptation of LLMs. In experiments, the prefix-finetuning model improved its ability to follow diverse instructions faster and was easier to train than the cross-attention (CA) model structure. (Prefix-finetuning and cross-attention are two model structures; see the Lynx model section below for details.)
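The "diversified instructions" point above amounts to pairing each training sample with one of several prompt phrasings instead of a single fixed template. A minimal sketch, where the templates themselves are our own examples rather than the ones used in the paper:

```python
import random

# Illustrative sketch of diversified instructions: the same VQA sample
# is rendered with one of several phrasings so the model does not
# overfit to a single prompt template. These templates are hypothetical.

TEMPLATES = [
    "Question: {q} Answer:",
    "{q} Give a short answer.",
    "Please look at the image and answer: {q}",
    "Based on the visual content, {q}",
]

def render_instruction(question: str, rng: random.Random) -> str:
    """Sample one template and fill in the question text."""
    return rng.choice(TEMPLATES).format(q=question)
```

Sampling a fresh template per example at training time is the simplest way to realize this; the paper's finding is that such diversity transfers to better zero-shot instruction following.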

Lynx model

The authors propose Lynx, a prefix-finetuning GPT4-style model trained in two stages. In the first stage, approximately 120M image-text pairs are used to align visual and language embeddings; in the second stage, image and video data from about 20 multi-modal tasks, together with natural language processing (NLP) data, are used to tune the model's instruction-following ability.


The overall structure of the Lynx model is shown in Figure 1 above.

The visual input is processed by the vision encoder to obtain visual tokens $$W_v$$. After a mapping layer, they are concatenated with the instruction tokens $$W_l$$ as the input to the LLM. This structure is called "prefix-finetuning" in this article, to distinguish it from the cross-attention structure used by Flamingo [3].
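The prefix-finetuning input layout can be sketched in a few lines of numpy. All dimensions here are made-up illustrative values, not the sizes used by Lynx:

```python
import numpy as np

# Sketch of prefix-finetuning: projected visual tokens W_v are
# concatenated in front of the instruction token embeddings W_l
# to form the LLM input sequence. Dimensions are hypothetical.

d_vision, d_model = 1024, 4096   # assumed hidden sizes
n_visual, n_text = 32, 16        # assumed sequence lengths

rng = np.random.default_rng(0)
visual_tokens = rng.normal(size=(n_visual, d_vision))  # W_v from the vision encoder
text_embeds = rng.normal(size=(n_text, d_model))       # W_l instruction embeddings
projection = rng.normal(size=(d_vision, d_model))      # learned mapping layer

# Map visual tokens into the LLM embedding space, then prepend them.
mapped_visual = visual_tokens @ projection
llm_input = np.concatenate([mapped_visual, text_embeds], axis=0)

print(llm_input.shape)  # (48, 4096): visual prefix followed by text tokens
```

The LLM then attends over this single sequence with its ordinary self-attention, which is what distinguishes the prefix approach from inserting separate cross-attention layers.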

In addition, the authors found that training cost can be further reduced by adding adapters after certain layers of the frozen LLM.
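A common form of such an adapter is a small residual bottleneck whose down/up projections are the only trained parameters while the surrounding layer stays frozen. A minimal numpy sketch, with hypothetical sizes and with the bottleneck design assumed rather than taken from the paper:

```python
import numpy as np

# Sketch of a residual bottleneck adapter placed after a frozen
# transformer layer: project down, apply ReLU, project up, add back.
# Only w_down and w_up would be trained; sizes are illustrative.

def adapter(hidden: np.ndarray, w_down: np.ndarray, w_up: np.ndarray) -> np.ndarray:
    """Residual bottleneck adapter over per-token hidden states."""
    return hidden + np.maximum(hidden @ w_down, 0.0) @ w_up

d_model, d_bottleneck = 4096, 64  # assumed sizes
rng = np.random.default_rng(0)
h = rng.normal(size=(8, d_model))                  # frozen layer output (8 tokens)
w_down = rng.normal(size=(d_model, d_bottleneck)) * 0.01
w_up = rng.normal(size=(d_bottleneck, d_model)) * 0.01

out = adapter(h, w_down, w_up)
print(out.shape)  # (8, 4096): same shape, so the adapter slots in-place
```

Because the adapter preserves the hidden-state shape and adds a residual path, it can be dropped between frozen layers without changing the rest of the network, which is where the training-cost saving comes from.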

Model effect

The authors evaluated existing open-source multi-modal LLMs on Open-VQA, MME [4], and the OwlEval human evaluation (results are shown below; evaluation details are in the paper). The Lynx model achieved the best performance on the Open-VQA image and video understanding tasks, the OwlEval human evaluation, and the MME Perception tasks. Among the other models, InstructBLIP also achieves high performance on most tasks, but its replies are too short; in comparison, in most cases the Lynx model gives concise reasons supporting its correct answers, which makes it more user-friendly (see the Cases display section below for examples).

1. The metrics on the Open-VQA image test set are shown in Table 1 below:


2. The metrics on the Open-VQA video test set are shown in Table 2 below.


3. The models with the top Open-VQA scores were selected for human evaluation on the OwlEval set; the results are shown in Figure 4. The human evaluation shows that the Lynx model has the best language generation performance.


4. On the MME benchmark, Lynx achieved the best performance on the Perception tasks, ranking first on 7 of the 14 subtask types. (See the appendix of the paper for detailed results.)

Cases display

Open-VQA image cases


OwlEval cases


Open-VQA video cases


Summary

In this article, the authors settled on prefix-finetuning as the main structure of the Lynx model and adopted the Open-VQA evaluation scheme with open-ended answers. Experimental results show that the Lynx model achieves the most accurate multi-modal understanding while maintaining the best multi-modal generation capability.


