
Doubao Big Model Team releases new Detail Image Caption evaluation benchmark to improve the reliability of VLM Caption evaluation

Jul 18, 2024, 08:10 PM

AIxiv is a column where this site publishes academic and technical content. Over the past few years, the AIxiv column has received more than 2,000 submissions, covering top laboratories at major universities and companies around the world and effectively promoting academic exchange and dissemination. If you have excellent work to share, feel free to submit it or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

Current vision-language models (VLMs) are evaluated mainly through QA-style benchmarks, which lack a reliable measure of a model's basic understanding ability, such as its detail image caption performance.

To address this problem, the Chinese Academy of Sciences, Peking University, and ByteDance's Doubao Big Model Team released the DetailCaps-4870 dataset and proposed CAPTURE, an effective evaluation metric. Among open-source evaluation metrics, CAPTURE achieves the highest consistency with expert evaluations, is highly reliable, and delivers results comparable to GPT-Eval at low cost.


  • Paper: https://arxiv.org/abs/2405.19092
  • Dataset: https://huggingface.co/datasets/foundation-multimodal-models/DetailCaps-4870
  • Code: https://github.com/foundation-multimodal-models/CAPTURE
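For readers who want to try the benchmark, below is a minimal sketch of pulling the released dataset with the Hugging Face datasets library; the split and column names are not described in this article, so the snippet only inspects whatever the dataset card actually provides.

```python
from datasets import load_dataset

# Minimal sketch: pull the DetailCaps-4870 benchmark from the Hugging Face Hub.
# The split and column names are not described in this article, so we simply
# inspect whatever the dataset actually provides.
ds = load_dataset("foundation-multimodal-models/DetailCaps-4870")
print(ds)  # shows the available splits and their column names
```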

Introduction

The current LVLM (large vision-language model) evaluation has the following problems:

  • Existing LVLM evaluation schemes mainly use the VQA format, which is heavily affected by instruction-following ability, and the design of QA prompts easily introduces human bias.
  • The image caption task can effectively evaluate a model's understanding ability, but existing caption benchmarks mostly use short captions as ground truth, which is outdated in the LVLM era.
  • At the same time, existing image caption metrics correlate poorly with evaluations by experts such as humans and GPT. Commonly used metrics such as BLEU and ROUGE match extracted n-grams and are not sensitive enough to the accuracy of key information; GPT-Eval is more consistent with expert evaluation but incurs high evaluation costs.

To address these problems, this research proposes a new detail image caption benchmark and evaluation metric, enabling accurate evaluation of LVLM image understanding at low cost.

Guided by the proposed evaluation dataset and metric, the research also explores a data construction method that taps LVLMs' own capabilities for detail image captioning, effectively improving the quality of detail caption data.

Figure 1: Left, an example of the CAPTURE metric; right, the detail caption construction method.

As shown in Table 1, the detail image caption benchmark proposed in this study has longer captions, a significantly larger number of non-repetitive 2-grams, and richer visual information:


Table 1: DetailCaps benchmark statistical information
CAPTURE: CAPtion evaluation by exTracting and coUpling coRE information

CAPTURE assesses caption quality in four steps. As illustrated in Figure 1 (left), it first uses the FACTUAL parser [1] to extract the object, attribute, and relation elements in a detail caption, and then filters out objects with no practical visual meaning. Matching scores (F1 scores) for the object, attribute, and relation elements are then computed through three matching stages (exact matching, synonym matching, and embedding matching) and combined with a weighted sum to produce the final score.
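To make the scoring procedure concrete, here is a simplified, self-contained sketch of a CAPTURE-style score. It assumes the object/attribute/relation elements have already been extracted and stop-word-filtered (the paper uses the FACTUAL parser for extraction), implements only the exact- and synonym-matching stages, and uses toy inputs; it is a rough sketch, not the official implementation.

```python
from collections import Counter

# Simplified sketch of a CAPTURE-style score (not the official implementation).
# Assumes object / attribute / relation elements are already extracted and
# stop-word-filtered; only exact and synonym matching are implemented here,
# and the third, embedding-based matching stage is omitted.

def _match_count(ref_elems, cand_elems, synonyms):
    """Greedy matching: exact matches first, then synonym matches."""
    ref_pool = Counter(ref_elems)
    matched, unmatched = 0, []
    for e in cand_elems:
        if ref_pool[e] > 0:                      # stage 1: exact match
            ref_pool[e] -= 1
            matched += 1
        else:
            unmatched.append(e)
    for e in unmatched:                          # stage 2: synonym match
        for s in synonyms.get(e, ()):
            if ref_pool[s] > 0:
                ref_pool[s] -= 1
                matched += 1
                break
    return matched

def _f1(ref_elems, cand_elems, synonyms):
    if not ref_elems or not cand_elems:
        return 0.0
    m = _match_count(ref_elems, cand_elems, synonyms)
    if m == 0:
        return 0.0
    p, r = m / len(cand_elems), m / len(ref_elems)
    return 2 * p * r / (p + r)

def capture_like_score(ref, cand, synonyms=None, weights=(5, 5, 2)):
    """ref / cand: dicts with 'objects', 'attributes', 'relations' lists."""
    synonyms = synonyms or {}
    scores = [_f1(ref[k], cand[k], synonyms)
              for k in ("objects", "attributes", "relations")]
    w_obj, w_attr, w_rel = weights               # default obj:attr:rel = 5:5:2
    return (w_obj * scores[0] + w_attr * scores[1] + w_rel * scores[2]) / sum(weights)

# Toy usage with hand-written elements (in practice they come from a scene-graph parser).
ref = {"objects": ["dog", "frisbee", "grass"],
       "attributes": ["brown dog", "red frisbee"],
       "relations": ["dog catching frisbee"]}
cand = {"objects": ["puppy", "frisbee", "lawn"],
        "attributes": ["brown puppy", "red frisbee"],
        "relations": ["puppy catching frisbee"]}
print(capture_like_score(ref, cand, synonyms={"puppy": ["dog"], "lawn": ["grass"]}))
```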

Guided by the DetailCaps benchmark and the CAPTURE metric, the research proposes a divide-and-conquer method that taps LVLMs' own potential for data synthesis, effectively improving the quality of detail caption data. The scheme first uses an LVLM to generate a full-image caption, then uses a segmentation model (SAM [2]) and clustering-based filtering to find key regions in the image, which are cropped out for local caption generation. To reduce hallucinations in the captions, a word-level filtering method is applied: the words and phrases describing visual elements are parsed out of each caption, and elements given low confidence scores by an object detection model (OWLv2 [3]) are filtered out. Finally, the filtered full-image caption and local captions are sent to an LLM (LLaMA-2 [4]) to be fused into the final image description.
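The orchestration of this divide-and-conquer pipeline can be sketched roughly as follows. Every model call (the LVLM captioner, SAM-based region proposal, the OWLv2 phrase scorer, and the LLaMA-2 fuser) is injected as a plain callable; the confidence threshold and the way flagged phrases are handled are assumptions, not the authors' exact implementation.

```python
from typing import Callable, List, Tuple

# Structural sketch of the divide-and-conquer synthesis pipeline (assumptions,
# not the authors' code): model calls are injected as plain callables.

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2) crop region in pixel coordinates

def synthesize_detail_caption(
    image,
    caption_image: Callable[[object], str],              # LVLM: image -> caption
    propose_regions: Callable[[object], List[Box]],      # SAM + clustering -> key regions
    crop: Callable[[object, Box], object],               # (image, box) -> cropped image
    extract_visual_phrases: Callable[[str], List[str]],  # caption -> visual words/phrases
    score_phrase: Callable[[object, str], float],        # open-vocabulary detector confidence
    fuse: Callable[[str, List[str], List[str]], str],    # LLM fusion of global/local captions
    min_conf: float = 0.3,                               # hallucination-filter threshold (assumed)
) -> str:
    # 1) Full-image caption from the LVLM.
    global_caption = caption_image(image)

    # 2) Locate key regions, crop them, and caption each crop.
    local_captions = [caption_image(crop(image, box)) for box in propose_regions(image)]

    # 3) Word-level hallucination filtering: flag visual phrases that the
    #    detector cannot confidently find in the original image.
    hallucinated = []
    for cap in [global_caption] + local_captions:
        for phrase in extract_visual_phrases(cap):
            if score_phrase(image, phrase) < min_conf:
                hallucinated.append(phrase)

    # 4) Fuse the global and local captions into one detailed description,
    #    passing the flagged phrases so the LLM can drop them during fusion.
    return fuse(global_caption, local_captions, hallucinated)
```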

Experiment

CAPTURE metric

(1) CAPTURE vs. other caption metrics

On DetailCaps-100 (with manually annotated reference captions and manual evaluation of model outputs), experts scored the captions generated by three models (LLaVA-1.5 [5], CogVLM [6], and ShareCaptioner [7]), and the consistency between each evaluation metric and the expert scores was then computed:

Consistency with expert ratings is measured by Pearson correlation (PCC, linear correlation), R^2 (absolute magnitude), Kendall's tau (consistency of ordered pairs), and per-sample Kendall's tau (computed separately for each sample and then averaged).
The results show that CAPTURE achieves the best consistency with expert evaluation across these measures. Among them, per-sample tau is closest to the actual detail image caption evaluation scenario, and CAPTURE is the only metric that comes close to GPT4-Eval on it, striking a good balance between evaluation accuracy and cost.
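For reference, the consistency measures above could be computed along the following lines with scipy; the samples-by-models array layout and the R^2 normalization are assumptions, and the numbers in the toy usage are made up.

```python
import numpy as np
from scipy.stats import pearsonr, kendalltau

# Sketch of the consistency measures between one metric's scores and expert scores.
# metric_scores[i][m] / expert_scores[i][m] hold the score of model m's caption for
# sample i; this samples-by-models layout is an assumption.

def consistency(metric_scores: np.ndarray, expert_scores: np.ndarray) -> dict:
    flat_m, flat_e = metric_scores.ravel(), expert_scores.ravel()

    pcc, _ = pearsonr(flat_m, flat_e)                    # linear correlation
    r2 = 1.0 - np.sum((flat_e - flat_m) ** 2) / np.sum((flat_e - flat_e.mean()) ** 2)
    tau, _ = kendalltau(flat_m, flat_e)                  # global pairwise-order consistency

    # Per-sample Kendall's tau: compare the ranking of candidate models within
    # each sample, then average over samples.
    per_sample = [kendalltau(m_row, e_row)[0]
                  for m_row, e_row in zip(metric_scores, expert_scores)]
    return {"pcc": pcc, "r2": r2, "kendall_tau": tau,
            "sample_tau": float(np.nanmean(per_sample))}

# Toy usage: 4 samples, each scored for 3 candidate models.
metric = np.array([[0.7, 0.5, 0.6], [0.8, 0.4, 0.5], [0.3, 0.6, 0.4], [0.9, 0.7, 0.8]])
expert = np.array([[0.8, 0.5, 0.7], [0.9, 0.3, 0.6], [0.4, 0.5, 0.5], [0.9, 0.6, 0.8]])
print(consistency(metric, expert))
```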

(2) Ablation analysis

The researchers also conducted an ablation analysis of each module in CAPTURE to verify its effectiveness:

Table 3: Ablation analysis of each module of CAPTURE

The results show that stop-word filtering effectively improves per-sample tau, demonstrating the module's effectiveness. However, the filtering affects the detail captions of different samples differently, so PCC and Kendall's tau drop slightly. Soft matching also improves per-sample tau and brings a significant gain on the 1-R^2 score, aligning CAPTURE's predicted scores with the absolute scores given by experts. For the weighted final score, the default obj:attr:rel ratio of 5:5:2 is optimal; increasing or decreasing the weight of any element degrades performance.

(3) Detail caption performance of open-source LVLMs

Overall, InternVL-1.5 is currently the best-performing open-source LVLM. The results of LLaVA and MiniGemini show that increasing the number of LLM parameters consistently improves a model's detail caption ability. Models with higher input resolution, or trained on high-quality detail captions, also perform better.


Detail caption data construction

Based on the detail caption evaluation dataset and metric, the researchers verified the effectiveness of the proposed detail caption data synthesis scheme.

(1) Effectiveness of the detail caption synthesis method on different LVLMs

As shown in the table below, the detail caption synthesis method proposed in this study achieves consistent improvements in detail caption quality on LLaVA-1.5-7B, LLaVA-1.5-13B, LLaVA-NEXT-7B, and Mini-Gemini-7B-HD:



(2) Further improving detail caption performance via a self-loop

The researchers also tried to further improve LVLM detail caption performance with a self-loop over the process of data labeling -> model training -> re-labeling, and obtained positive results across all four loops. In addition, comparing the open-source scheme [8] with the word-level hallucination filtering proposed in this paper confirms the effectiveness of the latter's design:

Table 6: Self-loop results and ablation analysis of the detail caption synthesis scheme
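A structural sketch of this self-loop, with the annotation and fine-tuning steps left as placeholder callables, might look like the following (an assumed control flow, not the authors' code):

```python
from typing import Callable, List, Tuple

# Sketch of the self-loop: the current model re-annotates the image pool, the new
# (image, caption) pairs are used for SFT, and the process repeats.

def self_loop(
    model,
    images: List[object],
    annotate: Callable[[object, object], str],                       # (model, image) -> detail caption
    finetune: Callable[[object, List[Tuple[object, str]]], object],  # (model, data) -> updated model
    rounds: int = 4,                                                 # the article reports gains over four loops
):
    for _ in range(rounds):
        data = [(img, annotate(model, img)) for img in images]  # data labeling
        model = finetune(model, data)                            # model training
    return model
```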

(3) LVLM self-annotated detail captions improve the model's overall performance

Using the proposed detail caption construction scheme, this study re-annotated the ShareGPT4V-100K data with LLaVA-1.5 and then used the re-annotated data for SFT training of LLaVA-1.5, achieving consistent performance improvements on multiple benchmarks:

Table 7: Effect of synthetic detail caption data on training the LLaVA-1.5-7B model

References
[1] Zhuang Li, Yuyang Chai, Terry Yue Zhuo, Lizhen Qu, Gholamreza Haffari, Fei Li, Donghong Ji, and Quan Hung Tran. Factual: A benchmark for faithful and consistent textual scene graph parsing. arXiv:2305.17497, 2023
[2] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. ICCV 2023
[3] Matthias Minderer, Alexey Gritsenko, and Neil Houlsby. Scaling open-vocabulary object detection. NeurIPS 2023
[4] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv:2307.09288, 2023
[5] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023
[6] Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, and Jie Tang. Cogvlm: Visual expert for pretrained language models. arXiv:2311.03079, 2023
[7] Lin Chen, Jinsong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. Sharegpt4v: Improving large multi-modal models with better captions. arXiv:2311.12793, 2023
[8] Zhang Li, Biao Yang, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu, and Xiang Bai. Monkey: Image resolution and text label are important things for large multi-modal models. arXiv:2311.06607, 2023

Doubao Big Model Team

Founded in 2023, ByteDance's Doubao Big Model Team is committed to developing the industry's most advanced AI large-model technology, becoming a world-class research team, and contributing to technological and social development.

The Doubao Big Model Team has a long-term vision and commitment in the AI field, with research directions covering NLP, CV, speech, and more, and with laboratories and research positions in China, Singapore, the United States, and other locations. Relying on the platform's abundant data and computing resources, the team continues to invest in related fields and has launched its own general-purpose large model offering multimodal capabilities, which supports 50+ downstream businesses such as Doubao, Coze, and Jimeng, and is open to enterprise customers through Volcano Engine. The Doubao app is currently the AIGC application with the largest user base in the Chinese market. You are welcome to join the ByteDance Doubao Big Model Team.

https://mp.weixin.qq.com/s/ZjQ-v6reZXhBP6G27cbmlQ
