
Surpassing CVPR 2024 methods, DynRefer achieves multiple SOTAs in region-level multi-modal recognition tasks

Jun 20, 2024, 08:31 PM

To achieve high-precision region-level multi-modal understanding, this paper proposes a dynamic resolution scheme that simulates the human visual cognitive system.

The authors of this article are from the LAMP Laboratory of the University of Chinese Academy of Sciences. The first author, Zhao Yuzhong, enrolled as a doctoral student at the University of Chinese Academy of Sciences in 2023, and co-author Liu Feng enrolled as a direct-admission doctoral student there in 2020. Their main research directions are vision-language models and visual object perception.

Introduction

DynRefer significantly improves region-level multi-modal recognition capabilities by simulating the human visual cognitive process. By introducing the dynamic resolution mechanism of the human eye, a single DynRefer model can simultaneously perform region recognition, region attribute detection, and region-level captioning, achieving SOTA performance on all of these tasks. In particular, it reaches 115.7 CIDEr on region-level captioning on the RefCOCOg dataset, significantly higher than CVPR 2024 methods such as RegionGPT, GlaMM, Osprey, and Alpha-CLIP.


  • Paper title: DynRefer: Delving into Region-level Multi-modality Tasks via Dynamic Resolution
  • Paper link: https://arxiv.org/abs/2405.16071
  • Paper code: https://github.com/callsys/DynRefer


Motivation

Region-level multi-modal tasks aim to convert specified image regions into language descriptions consistent with human preferences. Humans exhibit resolution-adaptive behavior when performing such tasks: the area of interest is perceived at high resolution, while non-attended areas are perceived at low resolution. Current region-level multi-modal large language models, however, typically adopt a fixed-resolution encoding scheme: they encode the entire image and then extract region features via RoI Align. This approach lacks the resolution adaptivity of the human visual cognitive system and encodes the area of interest with low efficiency and fidelity. To achieve high-precision region-level multi-modal understanding, we propose a dynamic resolution scheme that simulates the human visual cognitive system, as shown in the figure below.


Figure 1: Comparison of traditional region-level multi-modal methods (left) and the DynRefer method (right).

Method

1. Multi-view construction (simulating a dynamic-resolution image).
Since mainstream pre-trained vision-language models (e.g., CLIP) can only accept input at a uniform resolution, we simulate a dynamic-resolution image by constructing multiple uniform-resolution views: high resolution in the referred region and low resolution outside it. The specific process is shown in Figure 2. The original image x is cropped and resized into multiple candidate views. The cropping region of the i-th view is calculated as b_i = b_r + t_i (b_x − b_r), where t_i ∈ [0, 1]. Here b_r denotes the bounding box of the referred region, b_x denotes the box covering the entire image, and t_i denotes the interpolation coefficient. During training, we randomly sample n views from the candidate views, with interpolation coefficients {t_1, ..., t_n}, to simulate the images generated by gaze and rapid eye movements. We always retain the view containing only the referred region (i.e., t_1 = 0); experiments show that this view helps preserve region details, which is crucial for all region-level multi-modal tasks.
Figure 2: DynRefer training (top) and inference (bottom).
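Below is a minimal sketch of the multi-view construction, assuming boxes in (x0, y0, x1, y1) format; the helper names and the 224×224 view size are illustrative assumptions, not the paper's exact configuration:

```python
import random
from PIL import Image

def interpolate_box(region_box, image_box, t):
    """Linearly interpolate between the referred-region box and the
    whole-image box: t = 0 keeps only the region, t = 1 covers the image."""
    return tuple(
        int(round(r + t * (g - r))) for r, g in zip(region_box, image_box)
    )

def build_views(image, region_box, ts, view_size=224):
    """Crop one candidate view per interpolation coefficient and resize
    all views to the uniform resolution that CLIP expects."""
    image_box = (0, 0, image.width, image.height)
    return [
        image.crop(interpolate_box(region_box, image_box, t))
             .resize((view_size, view_size))
        for t in ts
    ]

def sample_coefficients(n):
    """Training-time sampling: t_1 = 0 (region-only view) is always kept;
    the remaining n - 1 coefficients are drawn at random from [0, 1]."""
    return [0.0] + [random.uniform(0.0, 1.0) for _ in range(n - 1)]
```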

2. Stochastic multi-view embedding. The specific process is shown in Figure 3. The n sampled views are encoded into spatial features by a frozen CLIP encoder and then processed by an RoI-Align module to obtain region embeddings {r_i}, as shown on the left side of Figure 3. These region embeddings are not spatially aligned, owing to the spatial errors introduced by cropping, resizing, and RoI-Align. Inspired by the deformable convolution operation, we propose an alignment module that reduces this bias by aligning each r_i to r_1, where r_1 is the region embedding of the view containing only the referred region. Each region embedding r_i is first concatenated with r_1, and a 2D offset map is then computed by a convolutional layer; the spatial features of r_i are resampled according to these 2D offsets. Finally, the aligned region embeddings are concatenated along the channel dimension and fused through linear layers, and the output is further compressed by a visual resampling module (a Q-former), which extracts the region representation of the referred region in the original image x (a simplified sketch of the alignment step follows Figure 3).


Figure 3: DynRefer network structure.
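The following is a simplified PyTorch sketch of the alignment step; the offset head, the use of grid_sample for the deformable-style resampling, and the tensor shapes are our assumptions for illustration, not the released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewAlign(nn.Module):
    """Align a view's region embedding r_i to the region-only embedding r_1
    by predicting a 2D offset field and resampling r_i with it."""
    def __init__(self, dim):
        super().__init__()
        # Offset head: concatenated (r_i, r_1) -> per-position (dx, dy).
        self.offset = nn.Conv2d(2 * dim, 2, kernel_size=3, padding=1)

    def forward(self, r_i, r_1):  # both: (B, C, H, W)
        _, _, H, W = r_i.shape
        offsets = self.offset(torch.cat([r_i, r_1], dim=1))   # (B, 2, H, W)
        # Base sampling grid in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
        base = torch.stack([xs, ys], dim=-1).to(r_i)          # (H, W, 2)
        grid = base.unsqueeze(0) + offsets.permute(0, 2, 3, 1)
        # Resample r_i at the offset positions (deformable-style warp).
        return F.grid_sample(r_i, grid, align_corners=True)
```

The aligned embeddings can then be concatenated along the channel dimension, fused with linear layers, and passed to the Q-former as described above.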

3. Vision-language alignment. The region representation computed by the stochastic multi-view embedding module is decoded by three decoders, shown in Figure 3 (right), each supervised by one of three multi-modal tasks:

i) Image region tagging. We employ a lightweight query-based recognition decoder for region tag generation. Tagging is performed by computing the confidence of each predefined tag, using the tags as queries and the region representation as keys and values. Tags parsed from the ground-truth captions supervise the recognition decoder.

ii) Region-text contrastive learning. Similar to the region tag decoder, this decoder is also defined as a query-based recognition decoder. It computes similarity scores between captions and region features and is supervised with the SigLIP loss.

iii) Language modeling. We use a pre-trained large language model to convert the region representation into a language description.
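A hedged sketch of the two recognition-style supervision heads, assuming the region representation has been pooled into feature vectors; the dot-product stand-in for the decoder's attention and the temperature/bias parameters are illustrative assumptions (the language-modeling loss is the LLM's standard next-token cross-entropy and is omitted here):

```python
import torch
import torch.nn.functional as F

def tagging_loss(tag_queries, region_feat, gt_tags):
    """Query-based tagging: each predefined tag embedding scores the region
    representation (a dot product stands in for the decoder's attention);
    supervised with binary cross-entropy against tags parsed from captions."""
    logits = tag_queries @ region_feat                 # (num_tags,)
    return F.binary_cross_entropy_with_logits(logits, gt_tags)

def siglip_loss(region_feats, caption_feats, temperature, bias):
    """Pairwise sigmoid contrastive loss (SigLIP): matched region-caption
    pairs sit on the diagonal, every other pair is a negative."""
    logits = region_feats @ caption_feats.T * temperature + bias   # (N, N)
    labels = 2 * torch.eye(logits.size(0), device=logits.device) - 1
    return -F.logsigmoid(labels * logits).mean()
```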


Figure 4: Performance of the dual-view (n = 2) DynRefer model on region-level multi-modal tasks under different interpolation coefficients t. View one is fixed (t_1 = 0); view two is either randomly selected or fixed.

4. Inference. During inference, the trained DynRefer model performs multi-modal tasks on dynamic-resolution images. By adjusting the interpolation coefficients {t_i} of the n sampled views, we can obtain region representations with dynamic-resolution characteristics. To evaluate the behavior at different dynamic resolutions, we trained a dual-view (n = 2) DynRefer model and evaluated it on four multi-modal tasks. As the curves in Figure 4 show, attribute detection achieves better results with views that contain no extra context (t = 0), which can be explained by the fact that such tasks usually require detailed region information. Region-level captioning and dense captioning require context-rich views (larger t) to fully understand the referred region. Note, however, that views with too much context (t close to 1) degrade performance on all tasks, because they introduce too much region-irrelevant information. When the task type is known, we can sample appropriate views according to the task characteristics. When the task type is unknown, we first construct a set of candidate views under different interpolation coefficients t in [0, 1], and then sample n views from the candidate set via a greedy search algorithm. The objective function of the search is defined as:

max Σ_{i=1}^{n} Σ_{j=i+1}^{n} ‖pHASH(v_i) ⊕ pHASH(v_j)‖ / (t_i + t_j),

where t_i denotes the interpolation coefficient of the i-th view, v_i denotes the i-th view, pHASH(·) denotes the perceptual image hash function, and ⊕ denotes the XOR operation. To compare the information of views from a global perspective, the pHASH(·) function converts views from the spatial domain to the frequency domain and then encodes them into hash codes. The weighting term 1/(t_i + t_j) reduces the weight of context-rich views, avoiding the introduction of too much redundant information.
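A sketch of the greedy selection under this objective, using the imagehash library's perceptual hash (subtracting two hashes yields their Hamming distance, i.e., the popcount of the XOR); the exact weighting term is our reconstruction from the description above:

```python
import imagehash  # pip install imagehash; views are PIL images

def gain(selected, ts_sel, candidate, t_cand):
    """Information gained by adding a candidate view: summed pHASH Hamming
    distance to the already-selected views, down-weighting large-t views."""
    h_c = imagehash.phash(candidate)
    return sum(
        (h_c - imagehash.phash(v)) / (t_cand + t_v + 1e-6)
        for v, t_v in zip(selected, ts_sel)
    )

def greedy_select(views, ts, n):
    """Start from the region-only view (t = 0) and greedily add the view
    that maximizes the objective until n views are chosen."""
    selected, ts_sel = [views[0]], [ts[0]]
    pool = list(zip(views[1:], ts[1:]))
    while len(selected) < n:
        best = max(pool, key=lambda vt: gain(selected, ts_sel, vt[0], vt[1]))
        pool.remove(best)
        selected.append(best[0])
        ts_sel.append(best[1])
    return selected, ts_sel
```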

Experiment

Region-level Captioning


On the region-level captioning task, DynRefer uses a smaller model (4.2B vs. 7B parameters) yet significantly surpasses many CVPR 2024 methods, such as RegionGPT, GlaMM, Alpha-CLIP, and Osprey, on both the METEOR and CIDEr metrics on the RefCOCOg and VG datasets, demonstrating DynRefer's large performance advantage.

Dense Captioning


On the dense captioning task, DynRefer improves mAP by 7.1% over the previous SOTA method GRiT on the VG 1.2 dataset.

Open Vocabulary Attribute Detection


On the region-level attribute detection task, DynRefer also achieves SOTA performance.

Open Vocabulary Region Recognition


On the region recognition task, DynRefer improves mAP by 15% and accuracy by 8.8% over RegionGPT (CVPR 2024), and achieves 15.7% higher mAP than ASM (ICLR 2024).

Ablation Experiments


  • Lines 1-6: Random dynamic multi-view is better than fixed views.
  • Lines 6-10: Selecting views by maximizing information is better than random selection.
  • Lines 10-13: Multi-task training learns better region representations.

Visualization

The following images show DynRefer's inference results. With a single model, DynRefer can simultaneously output region-level captions, tags, attributes, and categories.

