Sparse the multi-modal large model, and the 3B model MoE-LLaVA is comparable to LLaVA-1.5-7B

Feb 01, 2024, 05:15 PM

Large vision-language models (LVLMs) can improve performance by scaling up. However, increasing the parameter count raises training and inference costs, because the computation for every token activates all model parameters.

Researchers from Peking University, Sun Yat-sen University, and other institutions have jointly proposed a new training strategy, MoE-Tuning, to address the performance degradation that typically accompanies sparsifying a multi-modal model. MoE-Tuning can build sparse models with a surprisingly large parameter count but a constant computational cost. The researchers also proposed a new MoE-based sparse LVLM architecture, the MoE-LLaVA framework. In this framework, a routing algorithm activates only the top-k experts for each token while the remaining experts stay inactive, so the framework can use the expert network's resources more efficiently during deployment. These results offer new solutions to the challenges of multi-modal learning and model sparsification for LVLMs.

  • Paper address: https://arxiv.org/abs/2401.15947

  • Project address: https://github.com/PKU-YuanGroup/MoE-LLaVA

  • Demo address: https://huggingface.co/spaces/LanguageBind/MoE-LLaVA

  • Paper title: MoE-LLaVA: Mixture of Experts for Large Vision-Language Models

MoE-LLaVA has only 3B sparsely activated parameters, yet its performance is comparable to LLaVA-1.5-7B on various visual understanding datasets, and it even surpasses LLaVA-1.5-13B on the object hallucination benchmark. Through MoE-LLaVA, this study aims to establish a baseline for sparse LVLMs and provide valuable insights for future research toward more efficient and effective multi-modal learning systems. The MoE-LLaVA team has open-sourced all data, code, and models.

Figure 1 Comparison of MoE-LLaVA's hallucination performance with other LVLMs

Method Introduction

MoE-LLaVA adopts a three-stage training strategy.

Figure 2 MoE-Tuning flow chart

As shown in Figure 2, the vision encoder processes the input image to obtain a sequence of visual tokens. A projection layer maps the visual tokens into a dimension the LLM can accept. Similarly, the text paired with the image is projected through a word embedding layer to obtain a sequence of text tokens.

Phase 1: As shown in Figure 2, the goal of Phase 1 is to adapt visual tokens to the LLM and give the LLM the ability to understand the entities in an image. MoE-LLaVA uses an MLP to project image tokens into the LLM's input space, which means small image patches are treated as pseudo-text tokens by the LLM. At this stage, the LLM is trained to describe images and understand higher-level image semantics. No MoE layers are applied to the LVLM in this phase.
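The projection step above can be sketched as a small two-layer MLP. This is a minimal NumPy illustration, not the authors' implementation; the dimensions (a 1024-dim vision encoder, a 4096-dim LLM hidden size, 576 image patches) are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def mlp_project(visual_tokens, w1, b1, w2, b2):
    """Project visual tokens (n, d_vis) into the LLM hidden size (n, d_llm)."""
    return gelu(visual_tokens @ w1 + b1) @ w2 + b2

d_vis, d_llm, n_tokens = 1024, 4096, 576  # hypothetical encoder/LLM dims
w1 = rng.standard_normal((d_vis, d_llm)) * 0.02
b1 = np.zeros(d_llm)
w2 = rng.standard_normal((d_llm, d_llm)) * 0.02
b2 = np.zeros(d_llm)

visual_tokens = rng.standard_normal((n_tokens, d_vis))
# the projected patches act as pseudo-text tokens for the LLM
pseudo_text_tokens = mlp_project(visual_tokens, w1, b1, w2, b2)
print(pseudo_text_tokens.shape)  # (576, 4096)
```

The output sequence is simply concatenated with the embedded text tokens before being fed to the LLM.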

Figure 3 More specific training framework and training strategy

Phase 2: Fine-tuning with multi-modal instruction data is a key technique for improving the capability and controllability of large models, and in this phase the LLM is adapted into an LVLM with multi-modal understanding. The study adds more complex instructions at this stage, including advanced tasks such as visual logical reasoning and text recognition, which require stronger multi-modal understanding. A dense LVLM would normally be considered fully trained at this point. However, the research team found it challenging to convert the LLM into an LVLM and sparsify the model at the same time. MoE-LLaVA therefore uses the second-stage weights to initialize the third stage, reducing the difficulty of learning the sparse model.

Phase 3: MoE-LLaVA replicates the FFN multiple times to initialize the expert ensemble. When visual and text tokens are fed into the MoE layer, the router computes a matching weight between each token and every expert; each token is then dispatched to its top-k best-matching experts, and their outputs are aggregated into the final output by a weighted sum using the router weights. While the top-k experts are activated, the remaining experts stay inactive, yielding a MoE-LLaVA model with a vast number of possible sparse pathways.
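The routing step described above can be sketched as follows. This is a minimal NumPy illustration under simplifying assumptions (each "expert" is reduced to a single linear map rather than a full FFN, and all dimensions are made up for the example); it is not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(tokens, router_w, experts, top_k=2):
    """Route each token to its top-k experts; output is the router-weighted sum."""
    probs = softmax(tokens @ router_w)            # (n_tokens, num_experts)
    topk_idx = np.argsort(-probs, axis=-1)[:, :top_k]
    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        chosen = topk_idx[i]
        weights = probs[i, chosen]
        weights = weights / weights.sum()         # renormalize over activated experts
        for e_idx, w in zip(chosen, weights):     # inactive experts are never evaluated
            out[i] += w * experts[e_idx](tok)
    return out

rng = np.random.default_rng(0)
d, num_experts, n = 64, 4, 8                      # hypothetical sizes
# experts are copy-initialized from the FFN; here each is just a linear map
expert_ws = [rng.standard_normal((d, d)) * 0.02 for _ in range(num_experts)]
experts = [lambda x, W=W: x @ W for W in expert_ws]
router_w = rng.standard_normal((d, num_experts)) * 0.02
tokens = rng.standard_normal((n, d))
out = moe_layer(tokens, router_w, experts)
print(out.shape)  # (8, 64)
```

Because only the chosen top-k experts are evaluated per token, the per-token compute stays roughly constant regardless of how many experts the model holds.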

Experiment

As shown in Figure 4, since MoE-LLaVA is the first LVLM-based sparse model equipped with a soft router, the study groups previous models as dense models. The research team verified MoE-LLaVA's performance on 5 image question-answering benchmarks, reporting the number of activated parameters and the image resolution. Compared with the SOTA method LLaVA-1.5, MoE-LLaVA-2.7B×4 demonstrates strong image understanding, with performance on the 5 benchmarks very close to LLaVA-1.5. In particular, with 3.6B sparsely activated parameters, MoE-LLaVA exceeds LLaVA-1.5-7B on SQA-IMG by 1.9%. Notably, thanks to its sparse structure, MoE-LLaVA needs only 2.6B activated parameters to fully surpass IDEFICS-80B.

Figure 4 Performance of MoE-LLaVA on 9 benchmarks

The research team also compared against the recent small vision-language model TinyGPT-V: with an equivalent number of activated parameters, MoE-LLaVA-1.8B×4 exceeds TinyGPT-V by 27.5% on GQA and 10% on VisWiz, indicating MoE-LLaVA's strong understanding of natural images.

To verify MoE-LLaVA's multi-modal understanding more comprehensively, the study evaluated model performance on 4 benchmark toolkits. A benchmark toolkit tests whether a model can answer questions in natural language; the answers are usually open-ended with no fixed template. As shown in Figure 4, MoE-LLaVA-1.8B×4 outperforms Qwen-VL, which uses a larger image resolution. These results show that the sparse MoE-LLaVA can match or even exceed dense models with fewer activated parameters.

Figure 5 Performance evaluation of MoE-LLaVA on object hallucination

The study uses the POPE evaluation pipeline to test MoE-LLaVA for object hallucination, with results shown in Figure 5. MoE-LLaVA achieves the best performance, meaning it tends to generate objects consistent with the given image. Specifically, MoE-LLaVA-1.8B×4 surpasses LLaVA with only 2.2B activated parameters. The research team also observed that MoE-LLaVA's yes ratio is relatively balanced, indicating that the sparse model gives correct feedback according to the question rather than defaulting to one answer.
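The yes-ratio metric mentioned above is simply the fraction of "yes" answers a model gives on POPE's balanced yes/no questions; a value near 0.5 suggests the model is not biased toward one answer. A minimal sketch, with hypothetical model outputs:

```python
def yes_ratio(answers):
    """Fraction of POPE answers that are 'yes'; ~0.5 indicates balanced responses."""
    yes = sum(1 for a in answers if a.strip().lower() == "yes")
    return yes / len(answers)

# hypothetical answers on a balanced POPE split (real runs use thousands of questions)
preds = ["yes", "no", "yes", "no", "no", "yes", "yes", "no"]
print(yes_ratio(preds))  # 0.5
```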

Figure 6 Expert load visualization

Figure 6 shows the expert load of MoE-LLaVA-2.7B×4-Top2 on ScienceQA. Overall, at training initialization the load across experts in all MoE layers is fairly balanced. However, as the model gradually becomes sparse, the load on experts in layers 17 to 27 suddenly increases, eventually covering almost all tokens. In the shallower layers 5 to 11, experts 2, 3, and 4 mainly work together. Notably, expert 1 works almost exclusively in layers 1 to 3 and gradually drops out of the work as the model gets deeper. MoE-LLaVA's experts have thus learned a specific pattern, dividing labor according to consistent rules.

Figure 7 Visualization of modal distribution

Figure 7 shows the modal distribution across experts. The study found that the routing distributions of text and image tokens are very similar. For example, when expert 3 works heavily in layers 17 to 27, the proportions of text and image tokens it processes are similar. This shows that MoE-LLaVA has no clear preference for either modality.

The study also observed expert behavior at the token level, tracking the trajectories of all tokens through the sparse network on downstream tasks. For all activated pathways of text and image tokens, the study used PCA to reduce dimensionality and obtain the 10 main pathways, shown in Figure 8. The research team found that for an unseen text or image token, MoE-LLaVA consistently prefers to dispatch experts 2 and 3 at the deeper layers of the model, while experts 1 and 4 tend to handle tokens at the earlier layers. These results help us better understand the behavior of sparse models in multi-modal learning and explore unknown possibilities.
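The PCA step above can be sketched as follows. This is a minimal NumPy illustration under assumptions the source does not specify: each token's routing trajectory is flattened into a fixed-length vector (here a made-up 32-dim vector over 1000 tokens), and PCA is done via SVD of the centered matrix.

```python
import numpy as np

def top_pathways(trajectories, k=10):
    """Project per-token routing trajectories onto their k main PCA directions."""
    X = trajectories - trajectories.mean(axis=0)  # center the data
    # rows of Vt are principal directions, sorted by explained variance
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T                           # (n_tokens, k)

rng = np.random.default_rng(0)
# hypothetical: 1000 tokens, each with a 32-dim flattened routing trajectory
traj = rng.standard_normal((1000, 32))
reduced = top_pathways(traj, k=10)
print(reduced.shape)  # (1000, 10)
```

Clusters in the reduced space then correspond to tokens that follow similar expert pathways through the network.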

Figure 8 Visualization of activated pathways
