
Doubao Big Model Team releases new Detail Image Caption evaluation benchmark to improve the reliability of VLM Caption evaluation

WBOY
Release: 2024-07-18 20:10:02
The AIxiv column is where this site publishes academic and technical content. In the past few years, the AIxiv column has carried more than 2,000 reports, covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please feel free to contribute or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

Current visual language models (VLMs) are evaluated mainly through QA-style benchmarks, but the field lacks a reliable measure of a model's basic understanding ability, such as its performance on detail image captioning.

To address this problem, the Chinese Academy of Sciences, Peking University, and the ByteDance Doubao Large Model Team released the DetailCaps-4870 dataset and proposed an effective evaluation metric, CAPTURE. Among open-source evaluation metrics, CAPTURE achieves the highest consistency with expert judgments, and it matches GPT-Eval's reliability at a much lower cost.


  • Paper: https://arxiv.org/abs/2405.19092
  • Dataset: https://huggingface.co/datasets/foundation-multimodal-models/DetailCaps-4870
  • Code: https://github.com/foundation-multimodal-models/CAPTURE

Introduction

The current LVLM (large vision-language model) evaluation has the following problems:

  • Existing LVLM evaluation mainly adopts the VQA format, which is strongly affected by instruction-following ability, and the design of QA prompts easily introduces human bias.
  • The image caption task can effectively evaluate a model's understanding ability, but existing caption benchmarks mostly use short captions as ground truth, which is outdated in the LVLM era.
  • Meanwhile, existing image caption metrics correlate poorly with evaluations by experts such as humans and GPT. Commonly used metrics such as BLEU and ROUGE match extracted n-grams and are insufficiently sensitive to the accuracy of key information. GPT-Eval is more consistent with expert evaluation, but it incurs a high evaluation cost.

To address these problems, this work proposes a new detail image caption benchmark and evaluation metric, enabling accurate evaluation of LVLM image understanding at a lower cost.

Guided by the proposed evaluation dataset and metric, this work also explores a data construction method that taps LVLMs' own capabilities for detail image captioning, effectively improving the quality of detail caption data.

Figure 1: Left, an example of the CAPTURE metric; right, the detail caption construction method.

Compared with existing benchmarks, the detail image caption benchmark proposed in this work has longer texts, a significantly larger number of non-repetitive 2-grams, and richer visual information:


Table 1: DetailCaps benchmark statistical information
The CAPTURE (CAPtion evaluation by exTracting and coUpling coRE information) metric assesses caption quality in four steps. As shown in the figure below, it first uses the FACTUAL parser [1] to extract the object, attribute, and relation elements in a detail caption, then filters out objects with no practical significance. Next, matching scores (F1 scores) for the object, attribute, and relation elements are computed through three stages of matching (exact matching, synonym matching, and embedding matching) and combined with a weighted sum as the final result.
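The steps above can be sketched in code. This is a minimal illustration, not the official implementation: the real CAPTURE pipeline uses the FACTUAL parser plus synonym and embedding matching, whereas here the parsed elements are supplied by hand and only exact matching is applied. The 5:5:2 obj:attr:rel weights follow the paper's stated default.

```python
from collections import Counter

def f1_match(pred, ref):
    """Exact-match F1 between two lists (multisets) of extracted elements."""
    if not pred or not ref:
        return 1.0 if pred == ref else 0.0
    overlap = sum((Counter(pred) & Counter(ref)).values())  # multiset intersection
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

def capture_score(pred_elems, ref_elems, weights=(5, 5, 2)):
    """Weighted combination of object/attribute/relation F1 scores
    (weights follow the paper's default 5:5:2 ratio)."""
    keys = ("objects", "attributes", "relations")
    scores = [f1_match(pred_elems[k], ref_elems[k]) for k in keys]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)
```

With elements parsed from a candidate and a reference caption, `capture_score` returns a value in [0, 1]; the synonym and embedding matching stages of the real metric would raise scores for near-matches that exact matching misses.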

Guided by the DetailCaps benchmark and the CAPTURE metric, this work proposes a divide-and-conquer method that exploits LVLMs' own potential for data synthesis, effectively improving the quality of detail caption data. The scheme first uses an LVLM to generate a full-image caption, then uses a segmentation model (SAM [2]) and clustering-based filtering to find key positions in the image, which are cropped out for local caption generation. To reduce hallucination in the captions, a word-level filtering method first parses out the words and phrases describing visual elements, then uses an object detection model (OWLv2 [3]) to filter out low-scoring elements. Finally, the filtered full-image caption and local captions are sent to an LLM (LLaMA-2 [4]) to be fused into the final image description.

Experiments
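The divide-and-conquer pipeline can be sketched as below. All model calls (LVLM captioner, segmentation, detection scoring, LLM fusion) are injected as placeholder callables, the 0.3 threshold is purely illustrative, and whole local captions are filtered rather than individual phrases, which is a simplification of the paper's word-level filtering. A structural sketch, not the authors' implementation:

```python
def synthesize_detail_caption(image, caption_fn, segment_fn, detect_fn, fuse_fn):
    """Divide-and-conquer detail caption synthesis (structural sketch).

    caption_fn: LVLM captioner (stands in for e.g. LLaVA)
    segment_fn: key-region finder (stands in for SAM + clustering)
    detect_fn:  detection-based confidence scorer (stands in for OWLv2)
    fuse_fn:    caption fuser (stands in for an LLM such as LLaMA-2)
    """
    # 1. Full-image caption from the LVLM.
    global_cap = caption_fn(image)
    # 2. Find key regions, crop them, and caption each crop.
    local_caps = [caption_fn(crop) for crop in segment_fn(image)]
    # 3. Hallucination filtering: keep captions the detector confirms.
    #    (The paper filters at the word/phrase level; the 0.3 threshold
    #    and caption-level filtering here are illustrative only.)
    kept = [c for c in local_caps if detect_fn(image, c) >= 0.3]
    # 4. Fuse the global caption with the surviving local captions.
    return fuse_fn(global_cap, kept)
```

Injecting the models as callables keeps the pipeline's control flow visible while leaving the heavy components (LVLM, SAM, OWLv2, LLM) swappable.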


CAPTURE metric

(1) CAPTURE vs. other caption metrics

On DetailCaps-100, which has manually annotated reference captions, this study had experts score the captions generated by three models (LLaVA-1.5 [5], CogVLM [6], and ShareCaptioner [7]) and computed the consistency between each evaluation metric and the expert scores:

Consistency with expert ratings is measured by Pearson correlation (linear correlation), R² (absolute magnitude), Kendall's tau (agreement on ordered pairs), and Sample (Kendall's) tau (Kendall's tau computed separately for each sample, then averaged).

The results show that CAPTURE achieves the best consistency with expert evaluation across these measures. Among them, Sample tau is computed in the way closest to actual detail image caption evaluation; CAPTURE is the only method close to GPT4-Eval on this measure, striking a good balance between evaluation accuracy and cost.
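The consistency measures used here can be sketched in plain Python. This is an illustrative implementation under simple assumptions (tau-a without tie correction), not the paper's exact evaluation code:

```python
def pearson(x, y):
    """Pearson correlation coefficient (linear correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs."""
    n = len(x)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            conc += s > 0
            disc += s < 0
    return (conc - disc) / (n * (n - 1) / 2)

def sample_tau(metric_scores, expert_scores):
    """Per-sample Kendall's tau averaged over samples: each element of the
    inputs is the list of scores the candidate models got on one sample."""
    taus = [kendall_tau(m, e) for m, e in zip(metric_scores, expert_scores)]
    return sum(taus) / len(taus)
```

Sample tau rewards a metric for ranking the candidate models correctly on each individual image, which is why the article calls it the measure closest to practical detail caption evaluation.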

(2) Ablation analysis

The researchers also conducted ablation analysis on each module in CAPTURE and verified its effectiveness:
Table 3: Ablation analysis of each module of CAPTURE

The experimental results show that stop-words filtering effectively improves Sample tau, demonstrating the module's effectiveness. However, stop-words filtering affects the detail captions of different samples differently, causing a slight decrease in PCC and Kendall tau. Soft matching also improves Sample tau and yields a significant gain on the 1−R² score, aligning CAPTURE's predicted scores with the absolute scores given by experts. When computing the final score as a weighted sum, the default obj:attr:rel ratio of 5:5:2 is optimal; increasing or decreasing the proportion of any element degrades performance.

(3) Detail caption performance of open-source LVLMs

Overall, InternVL-1.5 is currently the best-performing open-source LVLM. The results of LLaVA and MiniGemini show that increasing the number of LLM parameters consistently improves a model's detail caption ability. Meanwhile, models with higher input resolution and trained on high-quality detail captions perform better.


Detail caption data construction
Based on the detail caption evaluation data set and evaluation indicators, the researchers verified the effectiveness of the proposed detail caption data synthesis scheme.
(1) Effectiveness of the detail caption synthesis method on different LVLMs

As shown in the table below, the detail caption synthesis method proposed in this study achieved consistent detail caption quality improvements on LLaVA-1.5-7B, LLaVA-1.5-13B, LLaVA-NEXT-7B, and Mini-Gemini-7B-HD:



(2) Further improving detail caption performance through self-looping

The researchers also tried to further improve LVLM detail caption performance through a self-loop of data labeling -> model training -> re-labeling, and obtained positive results across all four loops. Meanwhile, comparing an open-source solution [8] with the word-level hallucination filtering proposed in this article demonstrates the effectiveness of its design:
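The self-loop described above can be sketched as a simple iterative scheme. Training, labeling, and scoring are injected as placeholder callables, so this is a structural sketch only, not the authors' training code:

```python
def self_loop(model, images, label_fn, train_fn, score_fn, rounds=4):
    """Self-loop: label data -> train -> re-label, repeated for `rounds`.

    label_fn(model, image) -> caption   (the model annotates its own data)
    train_fn(model, images, captions) -> model  (SFT on the new labels)
    score_fn(model) -> quality score    (e.g. CAPTURE on a held-out set)
    """
    history = []
    for _ in range(rounds):
        captions = [label_fn(model, img) for img in images]   # (re-)annotate
        model = train_fn(model, images, captions)             # retrain on own labels
        history.append(score_fn(model))                       # track quality per loop
    return model, history
```

The article reports that caption quality improved in each of the four loops, which in this sketch would correspond to a monotonically increasing `history`.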

Table 6: Self-looping effect and ablation analysis of the detail caption synthesis scheme

(3) LVLM self-annotated detail captions can improve its overall performance

Using the proposed detail caption construction scheme, this study re-annotated the ShareGPT4V-100K data with LLaVA-1.5 and then used the re-annotated data for SFT training of LLaVA-1.5, achieving consistent performance improvements on multiple benchmarks:

Table 7: Effect of synthesized detail caption data in training the LLaVA-1.5-7B model

References
[1] Zhuang Li, Yuyang Chai, Terry Zhuo Yue, Lizhen Qu, Gholamreza Haffari, Fei Li, Donghong Ji, and Quan Hung Tran. Factual: A benchmark for faithful and consistent textual scene graph parsing. arXiv:2305.17497, 2023
[2] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. ICCV 2023
[3] Matthias Minderer, Alexey Gritsenko, and Neil Houlsby. Scaling open-vocabulary object detection. NeurIPS 2023
[4] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv:2307.09288, 2023
[5] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023
[6] Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, and Jie Tang. Cogvlm: Visual expert for pretrained language models. arXiv:2311.03079, 2023
[7] Lin Chen, Jisong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. Sharegpt4v: Improving large multi-modal models with better captions. arXiv:2311.12793, 2023
[8] Zhang Li, Biao Yang, Qiang Liu, Zhiyin Ma, Shuo Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu, and Xiang Bai. Monkey: Image resolution and text label are important things for large multi-modal models. arXiv:2311.06607, 2023

Doubao Large Model Team

Founded in 2023, the ByteDance Doubao Large Model Team is committed to developing the industry's most advanced AI large model technology, becoming a world-class research team, and contributing to the development of technology and society.

The Doubao Large Model Team has a long-term vision and determination in AI, with research directions covering NLP, CV, speech, and more, and with laboratories and research positions in China, Singapore, the United States, and elsewhere. Relying on the platform's abundant data and computing resources, the team continues to invest in these fields and has launched a self-developed general-purpose large model providing multimodal capabilities. It supports 50+ downstream applications such as Doubao, Coze, and Dreamina, and is available to enterprise customers through Volcano Engine. The Doubao app is currently the AIGC application with the largest user base in the Chinese market. You are welcome to join the ByteDance Doubao Large Model Team.

https://mp.weixin.qq.com/s/ZjQ-v6reZXhBP6G27cbmlQ


source:jiqizhixin.com