
Yi-VL large model is open source and ranks first in MMMU and CMMMU

Jan 22, 2024, 09:30 PM
Industry | 01.AI (Zero One Everything)

On January 22, the Yi series model family welcomed a new member: the Yi Vision Language (Yi-VL) multimodal large language model was officially open sourced to the world. The Yi-VL model is built on the Yi language model and comes in two versions: Yi-VL-34B and Yi-VL-6B.

Yi-VL model open source address:
  • https://huggingface.co/01-ai
  • https://www.modelscope.cn/organization/01ai
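As a quick way to get started, below is a minimal sketch for downloading the model weights from the Hugging Face Hub, assuming the repository id "01-ai/Yi-VL-6B" under the 01-ai organization; adjust the id for Yi-VL-34B or use the ModelScope mirror as needed.

```python
# Minimal sketch: download the Yi-VL-6B weights from the Hugging Face Hub.
# The repo id is an assumption based on the 01-ai organization page linked above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="01-ai/Yi-VL-6B")
print(f"Model files downloaded to: {local_dir}")
```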

With excellent image-text understanding and dialogue generation capabilities, the Yi-VL model has achieved leading results on the English benchmark MMMU and the Chinese benchmark CMMMU, demonstrating its strength on complex interdisciplinary tasks.

The MMMU (Massive Multi-discipline Multi-modal Understanding & Reasoning) dataset contains 11,500 questions from six core disciplines (Art & Design, Business, Science, Health & Medicine, Humanities & Social Sciences, and Technology & Engineering). Its highly heterogeneous image types and intertwined text-image information place extremely high demands on a model's advanced perception and reasoning capabilities. On this test set, Yi-VL-34B surpassed a series of multimodal large models with an accuracy of 41.6%, second only to GPT-4V (55.7%), showing a strong ability to understand and apply cross-disciplinary knowledge.

[Image: MMMU leaderboard results]

Source: https://mmmu-benchmark.github.io

On the CMMMU dataset, built specifically for Chinese-language scenarios, the Yi-VL model shows a unique advantage in "understanding Chinese better". CMMMU contains approximately 12,000 Chinese multimodal questions drawn from university exams, quizzes, and textbooks. On this test set, GPT-4V reaches an accuracy of 43.7%, followed closely by Yi-VL-34B at 36.5%, which puts it in a leading position among existing open-source multimodal models.

[Image: CMMMU leaderboard results]

Source: https://cmmmu-benchmark.github.io/

So, how does the Yi-VL model perform in diverse scenarios such as image-text dialogue?

Let’s look at two examples first:

[Image: Yi-VL image-text dialogue examples]

As you can see, building on the strong text understanding capabilities of the Yi language model, simply aligning images with the language model yields a capable multimodal vision-language model. This is also one of the core highlights of the Yi-VL model.

Below is an overview of the Yi-VL model's architecture design and training process.

In terms of architecture, the Yi-VL model is based on the open-source LLaVA architecture and contains three main modules:

  • Vision Transformer (ViT): used for image encoding, with trainable parameters initialized from the open-source OpenCLIP ViT-H/14 model. By learning to extract features from large-scale image-text pairs, it gives the model the ability to process and understand images.
  • Projection module: aligns image features with the text feature space. It consists of a Multilayer Perceptron (MLP) with layer normalization. This design allows the model to fuse and process visual and textual information more effectively, improving the accuracy of multimodal understanding and generation (a minimal sketch of such a module appears after this list).
  • Yi-34B-Chat and Yi-6B-Chat large language models: provide Yi-VL with powerful language understanding and generation capabilities. This part of the model uses advanced natural language processing techniques to help Yi-VL deeply understand complex language structures and generate coherent, relevant text output.
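For concreteness, here is a minimal, illustrative sketch of what a LLaVA-style projection module (an MLP with layer normalization) can look like in PyTorch. The dimensions (1280 for OpenCLIP ViT-H/14 patch features, 4096 for the LLM hidden size) and the exact layer layout are assumptions for illustration, not the official Yi-VL implementation.

```python
# Illustrative LLaVA-style projection module; dimensions are assumed, not official.
import torch
import torch.nn as nn

class VisionProjection(nn.Module):
    """Two-layer MLP with layer normalization that maps ViT patch features
    into the language model's embedding space."""
    def __init__(self, vit_dim: int = 1280, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vit_dim, llm_dim),
            nn.LayerNorm(llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
            nn.LayerNorm(llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vit_dim)
        return self.proj(patch_features)  # (batch, num_patches, llm_dim)

# The projected patch embeddings are then concatenated with the text token
# embeddings and fed to the Yi chat model as a single sequence.
projection = VisionProjection()
dummy_patches = torch.randn(1, 256, 1280)
print(projection(dummy_patches).shape)  # torch.Size([1, 256, 4096])
```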

In terms of training, the Yi-VL model's training process is divided into three carefully designed stages, aimed at comprehensively improving the model's visual and language processing capabilities.

  • Stage one: 01.AI uses 100 million image-text pairs to train the ViT and Projection modules. At this stage the image resolution is set to 224x224, enhancing ViT's knowledge acquisition within its architecture while enabling efficient alignment with the large language model.
  • Stage two: 01.AI increases the ViT image resolution to 448x448, making the model better at recognizing complex visual details. This stage uses approximately 25 million image-text pairs.
  • Stage three: 01.AI opens the parameters of the entire model for training, with the goal of improving the model's performance in multimodal chat interaction. The training data covers a diverse range of sources, totaling approximately 1 million image-text pairs, ensuring breadth and balance of the data (the three stages are summarized in the sketch after this list).
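To summarize the schedule, the following is an illustrative Python data structure capturing the three stages as described above. The field names, the assumption that the language model stays frozen during the first two stages, and the stage-three resolution are our own reading, not an official 01.AI training configuration.

```python
# Illustrative summary of the three training stages described in the article.
TRAINING_STAGES = [
    {
        "stage": 1,
        "trainable_modules": ["vit", "projection"],   # LLM assumed frozen
        "image_resolution": 224,
        "image_text_pairs": 100_000_000,
        "goal": "align visual features with the language model",
    },
    {
        "stage": 2,
        "trainable_modules": ["vit", "projection"],
        "image_resolution": 448,
        "image_text_pairs": 25_000_000,
        "goal": "improve recognition of fine-grained visual details",
    },
    {
        "stage": 3,
        "trainable_modules": ["vit", "projection", "llm"],  # full model unfrozen
        "image_resolution": 448,  # not stated in the article; assumed unchanged
        "image_text_pairs": 1_000_000,
        "goal": "multimodal chat instruction tuning",
    },
]

for stage in TRAINING_STAGES:
    print(f"Stage {stage['stage']}: train {stage['trainable_modules']} "
          f"at {stage['image_resolution']}px on {stage['image_text_pairs']:,} pairs")
```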

The 01.AI technical team has also verified that, building on the strong language understanding and generation capabilities of the Yi language model, other multimodal training methods such as BLIP, Flamingo, and EVA can be used to quickly train multimodal image-text models capable of efficient image understanding and fluent image-text dialogue. The Yi series models can serve as base language models for multimodal models, providing a new option for the open-source community.

Currently, the Yi-VL model is publicly available on platforms such as Hugging Face and ModelScope. Users can experience the model's performance in multimodal scenarios through the links above. You are welcome to explore the capabilities of the Yi-VL multimodal language model and experience this cutting-edge AI technology.

