
Evaluating LLM performance on video temporal grounding with the LLM4VG benchmark developed by Tsinghua University


News on December 29: the reach of large language models (LLMs) has expanded from plain natural language processing into multi-modal domains spanning text, audio, and video. One key task in this expansion is video temporal grounding (Video Grounding, VG).


The goal of the VG task is to locate the start and end times of a target video segment based on a given query; the core challenge is to determine those temporal boundaries accurately.
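The article does not spell out the benchmark's scoring protocol, but VG predictions are conventionally judged by temporal Intersection-over-Union (IoU) against the ground-truth segment; a minimal sketch of that metric, assuming segments are given as (start, end) pairs in seconds:

```python
def temporal_iou(pred: tuple[float, float], gt: tuple[float, float]) -> float:
    """IoU of two time intervals given as (start, end) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# Example: ground truth 12.0s-27.5s, prediction 10.0s-25.0s -> IoU ~= 0.74,
# which would count as a hit under the common IoU >= 0.5 criterion.
print(temporal_iou((10.0, 25.0), (12.0, 27.5)))
```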

The Tsinghua University research team recently released the "LLM4VG" benchmark, designed specifically to evaluate how well LLMs perform on VG tasks.

The benchmark considers two main strategies. The first trains a video LLM (VidLLM) directly on text-video datasets, so the model learns the association between video and language from large-scale video data; under this strategy the VidLLM takes the video content and the VG task instruction directly as input and outputs its prediction of the target segment. The second strategy combines a conventional LLM with pre-trained visual models, which supply the visual content of the video.
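The article does not name the specific VidLLMs or their interfaces, so the following is only a rough sketch of the first strategy; the `vidllm` callable is a stand-in for whichever video-trained model is being evaluated:

```python
from typing import Callable

def ground_with_vidllm(video_path: str, query: str,
                       vidllm: Callable[[str, str], str]) -> str:
    """First strategy (sketch): hand the raw video plus the VG instruction to a
    video-trained LLM and let it predict the segment directly.  `vidllm` is a
    placeholder callable (video_path, prompt) -> answer, not an API from the paper."""
    instruction = (
        "Watch the video and report the start and end time, in seconds, of the "
        f"moment described by: '{query}'. Answer in the form 'start-end'."
    )
    return vidllm(video_path, instruction)  # e.g. "12.0-27.5"
```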

The second strategy is more involved: it pairs an LLM with visual description models. The description models produce textual descriptions of the video content, which are then combined with the VG task instructions by means of carefully designed prompts.

These prompts are designed to weave the VG instructions together with the provided visual descriptions, helping the LLM process and understand the task-relevant video content.
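The article does not reproduce the actual prompt templates, so the following is only a plausible sketch of the second strategy, with made-up captions for illustration: timestamped descriptions from a visual captioning model are folded into a single prompt for a text-only LLM.

```python
def build_vg_prompt(captions: list[tuple[float, str]], query: str) -> str:
    """Second strategy (sketch): combine timestamped captions from a visual
    description model with the VG instruction into one prompt for a text-only LLM.
    The template below is illustrative, not the benchmark's actual prompt."""
    lines = [f"{t:.1f}s: {caption}" for t, caption in captions]
    return (
        "Below are frame-level descriptions of a video.\n"
        + "\n".join(lines)
        + f"\n\nQuery: {query}\n"
        "Based only on these descriptions, give the start and end time in seconds "
        "of the segment matching the query, in the form 'start-end'."
    )

# Hypothetical captions, for illustration only.
captions = [(0.0, "a man walks into a kitchen"),
            (12.0, "the man pours water into a kettle"),
            (27.5, "the man sits down at a table")]
print(build_vg_prompt(captions, "the man pours water"))
```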

The evaluation shows that VidLLMs, despite being trained directly on video content, still fall well short of satisfactory VG performance. This finding highlights the need to incorporate more time-related video tasks into their training.

The second strategy performs better than the VidLLMs, pointing to a promising direction for future research. It is mainly limited by the capabilities of the visual models and by prompt design; a more refined visual model able to generate detailed and accurate video descriptions could therefore significantly improve the VG performance of LLMs.


In summary, this study provides a groundbreaking evaluation of applying LLMs to VG tasks, highlighting the need for more sophisticated approaches to model training and prompt design.

The reference address of the paper is attached:

https://www.php.cn/link/a7fd9fd835f54f0f28003c679fd44b39
