The first Mamba-based MLLM is here! Model weights, training code, etc. have all been open source


Jul 17, 2024 am 02:46 AM
project Cobra


The AIxiv column is a column where academic and technical content is published on this site. In the past few years, the AIxiv column of this site has received more than 2,000 reports, covering top laboratories from major universities and companies around the world, effectively promoting academic exchanges and dissemination. If you have excellent work that you want to share, please feel free to contribute or contact us for reporting. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com.

Introduction

In recent years, multimodal large language models (MLLMs) have achieved remarkable success in various fields. However, as the underlying models for many downstream tasks, current MLLMs are built on the well-known Transformer network, whose attention mechanism has inefficient quadratic computational complexity. To improve the efficiency of such base models, this paper proposes Cobra. Extensive experiments show that: (1) Cobra is highly competitive with the current computationally efficient state-of-the-art methods (e.g., LLaVA-Phi, TinyLLaVA, and MobileVLM v2), and is faster thanks to its linear sequence modeling. (2) Interestingly, results on challenging closed-set prediction benchmarks show that Cobra performs well at overcoming visual illusions and judging spatial relationships. (3) Notably, Cobra achieves performance comparable to LLaVA with only about 43% of LLaVA's parameters.
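The complexity difference at the heart of this argument can be illustrated with a toy example. The recurrence below is a minimal sketch in the spirit of state space models, not Mamba's actual selective scan: the decay constant, input weight, and single scalar state are all hypothetical, chosen only to show that an SSM touches each token once while attention scores every token against every other.

```python
# Illustrative only: a toy linear recurrence in the spirit of state space
# models, contrasted with the pairwise-interaction count of attention.
# The constants and update rule are hypothetical, not Mamba's actual scan.

def ssm_scan(inputs, a=0.9, b=0.1):
    """Process a sequence with a single-state linear recurrence: O(L) steps."""
    h, outputs = 0.0, []
    for x in inputs:
        h = a * h + b * x      # one state update per token
        outputs.append(h)
    return outputs

def attention_pair_count(seq_len):
    """Attention scores every token against every token: O(L^2) pairs."""
    return seq_len * seq_len

seq = [1.0, 2.0, 3.0, 4.0]
print(len(ssm_scan(seq)))        # 4 sequential updates for 4 tokens
print(attention_pair_count(4))   # 16 pairwise interactions
```

Doubling the sequence length doubles the work for the scan but quadruples it for attention, which is why the gap matters most for long multimodal contexts.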

Large language models (LLMs) can interact only through language, which limits their adaptability to more diverse tasks. Multimodal understanding is critical to enhancing a model's ability to effectively address real-world challenges. Therefore, researchers are actively working to extend large language models with multimodal information processing capabilities. Vision-language models (VLMs) such as GPT-4, LLaMA-Adapter, and LLaVA have been developed to enhance the visual understanding capabilities of LLMs.

However, previous research has mainly tried to obtain efficient VLMs in a similar way: reducing the parameters of the base language model or the number of visual tokens while keeping the attention-based Transformer structure unchanged. This paper proposes a different perspective: directly using a state space model (SSM) as the backbone network yields an MLLM with linear computational complexity. In addition, this paper explores various modal fusion schemes to create an effective multimodal Mamba. Specifically, it adopts the Mamba language model as the base model of the VLM; Mamba has shown performance competitive with Transformer language models but with higher inference efficiency. Tests show that Cobra's inference speed is 3x to 4x faster than MobileVLM v2 3B and TinyLLaVA 3B of the same parameter magnitude. Even compared with the much larger LLaVA v1.5 model (7B parameters), Cobra still achieves matching performance on several benchmarks with about 43% of the parameters.


Figure: generation speed demo of Cobra and LLaVA v1.5 7B

The main contributions of this paper are as follows:


  1. This paper investigates how existing multimodal large language models (MLLMs) often rely on Transformer networks, which exhibit quadratic computational complexity. To address this inefficiency, it introduces Cobra, a novel MLLM with linear computational complexity.
  2. It dives into various modal fusion schemes to optimize the integration of visual and linguistic information in the Mamba language model. Through experiments, this paper explores the effectiveness of different fusion strategies and determines the method that produces the most effective multimodal representation.
  3. Extensive experiments were conducted to evaluate Cobra's performance against parallel studies aimed at improving the computational efficiency of underlying MLLMs. Notably, Cobra achieves performance comparable to LLaVA even with fewer parameters, highlighting its efficiency.


  • Original link: https://arxiv.org/pdf/2403.14520v2.pdf
  • Project link: https://sites.google.com/view/cobravlm/
  • Paper title: Cobra: Extending Mamba to Multi-Modal Large Language Model for Efficient Inference

Method introduction

Model architecture

Cobra adopts the classic VLM structure consisting of a visual encoder, a projector connecting the two modalities, and an LLM language backbone. The LLM backbone uses the 2.8B-parameter pre-trained Mamba language model, which was pre-trained on the SlimPajama dataset of 600B tokens and then instruction-tuned on conversation data.

Figure: Cobra network structure diagram


Unlike LLaVA and similar models, Cobra uses a visual representation that fuses DINOv2 and SigLIP. By concatenating the outputs of the two visual encoders and feeding them into the projector, the model can better capture both the high-level semantic features provided by SigLIP and the low-level fine-grained image features extracted by DINOv2.
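The fusion described above can be sketched in a few lines. This is a minimal NumPy illustration, not Cobra's actual implementation: the feature dimensions, the two-layer MLP width, and the ReLU activation are all assumptions chosen for the sketch; only the concatenate-then-project structure comes from the text.

```python
import numpy as np

# A minimal sketch of the fusion described above, under assumed shapes:
# channel-wise concatenation of the two encoders' patch features,
# followed by an MLP projector into the LLM embedding space.

rng = np.random.default_rng(0)

num_patches = 16
dino_feats = rng.standard_normal((num_patches, 1024))    # DINOv2 output (assumed dim)
siglip_feats = rng.standard_normal((num_patches, 1152))  # SigLIP output (assumed dim)

# Concatenate along the channel dimension, as the text describes.
fused = np.concatenate([dino_feats, siglip_feats], axis=-1)  # (16, 2176)

# Two-layer MLP projector mapping fused features to an assumed LLM hidden size.
d_in, d_hidden, d_llm = fused.shape[-1], 2560, 2560
w1 = rng.standard_normal((d_in, d_hidden)) * 0.02
w2 = rng.standard_normal((d_hidden, d_llm)) * 0.02
visual_tokens = np.maximum(fused @ w1, 0.0) @ w2  # ReLU MLP (activation assumed)

print(visual_tokens.shape)  # (16, 2560): one visual token per patch for the LLM
```

The projected `visual_tokens` would then be prepended to the text token embeddings before being fed to the Mamba backbone.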

Training scheme

Recent research shows that for the existing LLaVA-style training paradigm (i.e., training the pre-alignment stage of the projection layer and the fine-tuning stage of the LLM backbone once each), the pre-alignment stage may be unnecessary and the resulting model may still be underfitted. Therefore, Cobra abandons the pre-alignment stage and directly fine-tunes the entire LLM language backbone together with the projector. This fine-tuning was performed for two epochs with random sampling on a combined dataset consisting of:

  1. The hybrid dataset used in LLaVA v1.5, which contains a total of 655K visual multi-turn conversations, including academic VQA samples, visual instruction tuning data from LLaVA-Instruct, and plain-text instruction tuning data from ShareGPT.
  2. LVIS-Instruct-4V, which contains 220K images with visually aligned, context-aware instructions generated by GPT-4V.
  3. LRV-Instruct, a dataset of 400K visual instructions covering 16 vision-language tasks, aimed at mitigating hallucination.

In total, the combined dataset contains approximately 1.2 million images with corresponding multi-turn conversation data, as well as plain-text conversation data.
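The data recipe above can be summarized as a small sketch. The sample counts come from the text; the dataset keys and the index-shuffling loop are illustrative placeholders, not Cobra's actual training code.

```python
import random

# A sketch of the combined fine-tuning mixture described above; sample
# counts are from the text, the sampling loop is illustrative only.
mixture = {
    "llava_v1.5_mix": 655_000,    # multi-turn conversations incl. academic VQA
    "lvis_instruct_4v": 220_000,  # GPT-4V-generated context-aware instructions
    "lrv_instruct": 400_000,      # hallucination-mitigating visual instructions
}

total = sum(mixture.values())
print(total)  # 1275000 samples, i.e. roughly the ~1.2 million figure in the text

# Random sampling over the combined pool for two epochs (indices only).
indices = list(range(total))
for epoch in range(2):
    random.shuffle(indices)  # draw the whole mixture in random order each epoch
    # ... iterate over `indices`, fetch samples, fine-tune projector + LLM ...
```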

Experiment

Quantitative experiment

In the experimental part, this paper compares the proposed Cobra model with open-source SOTA VLMs on standard benchmarks, and compares its answer-generation speed with Transformer-based VLMs of the same parameter magnitude. Scores are compared on six benchmarks in total: the four open-ended VQA tasks VQA-v2, GQA, VizWiz, and TextVQA, and the two closed-set prediction tasks VSR and POPE.

Figure: benchmark comparison of Cobra and other open-source models

Qualitative test


In addition, the paper gives two VQA examples to qualitatively illustrate Cobra's superiority in recognizing spatial relationships between objects and in reducing model hallucination.

Figure: Cobra and other baseline models on judging spatial relations between objects

Figure: Cobra and other baseline models on an example of visual illusion

In both examples, LLaVA v1.5 and MobileVLM give a wrong answer, while Cobra gives an accurate description; notably, in the second instance, Cobra accurately identifies that the picture comes from a robot simulation environment.

Ablation experiment
This paper conducts ablation studies on Cobra's design choices along two dimensions: performance and generation speed. The experiments ablate the projector, the visual encoder, and the LLM language backbone respectively. The projector ablation shows that the MLP projector adopted in this paper significantly outperforms the LDP module, which is designed to reduce the number of visual tokens in order to improve computation speed. Meanwhile, because Cobra's sequence processing speed and computational complexity are better than a Transformer's, the LDP module shows no obvious advantage in generation speed; for Mamba-class models, a downsampler that reduces the number of visual tokens at the cost of accuracy may therefore be unnecessary.

Figure: performance comparison in the ablation experiments

Figure: generation speed comparison of Cobra and other models


The ablation results for the visual encoder show that fusing DINOv2 features effectively improves Cobra's performance. In the language backbone experiments, a Mamba language model without instruction fine-tuning was completely unable to give reasonable answers in open-ended QA tests, while the fine-tuned Mamba language model achieves considerable performance across the various tasks.
Conclusion

This paper proposes Cobra, which addresses the efficiency bottleneck of existing multimodal large language models that rely on Transformer networks with quadratic computational complexity, by exploring the combination of a language model with linear computational complexity and multimodal inputs. For fusing visual and language information, through in-depth study of different modal fusion schemes, this paper successfully optimizes information integration inside the Mamba language model and achieves more effective multimodal representations. Experiments show that Cobra not only significantly improves computational efficiency but is also comparable in performance to advanced models such as LLaVA, performing especially well at overcoming visual illusions and judging spatial relationships, while using significantly fewer parameters. This opens up new possibilities for deploying high-performance AI models in environments that require high-frequency processing of visual information, such as vision-based robot feedback control.
