
# ICLR 2024 Oral: Noisy correspondence learning in long videos, with single-GPU training in only one day

Mar 05, 2024, 10:58 PM

In a talk at the 2024 World Economic Forum, Turing Award winner Yann LeCun argued that models for processing video should learn to make predictions in an abstract representation space rather than in pixel space [1]. Multimodal video representation learning, which uses textual information to extract features useful for video understanding and content generation, is a key technology for realizing this vision.

However, the pervasive noisy correspondence between videos and their text descriptions seriously hinders video representation learning. To address this challenge, the researchers propose a robust long-video learning scheme based on optimal transport theory. The paper was accepted as an Oral presentation at ICLR 2024.


  • Paper title: Multi-granularity Correspondence Learning from Long-term Noisy Videos
  • Paper address: https://openreview.net/pdf?id=9Cu8MRmhq2
  • Project address: https://lin-yijie.github.io/projects/Norton
  • Code address: https://github.com/XLearning-SCU/2024-ICLR-Norton

## Background and Challenges

Video representation learning is one of the hottest topics in multimodal research. Large-scale video-language pre-training has achieved remarkable results on a variety of video understanding tasks, such as video retrieval, visual question answering, and segment segmentation and localization. However, most existing video-language pre-training work focuses on clip-level understanding of short videos, ignoring the long-term relationships and dependencies present in long videos.

As shown in Figure 1 below, the core difficulty in long video learning is how to encode the temporal dynamics of the video. Current solutions mainly focus on designing customized video encoders to capture long-term dependencies [2], but these usually incur a large resource overhead.


Figure 1: Example of long video data [2]. The video contains a complex storyline and rich temporal dynamics. Each sentence can only describe a short fragment, and understanding the entire video requires long-term correlation reasoning capabilities.

Since long videos usually rely on automatic speech recognition (ASR) to obtain their text subtitles, the text paragraph (Paragraph) corresponding to an entire video can be divided into multiple short captions (Caption) according to the ASR timestamps, and the long video (Video) can be divided into multiple video clips (Clip) accordingly. A late-fusion strategy that aligns clips with captions is far more efficient than directly encoding the entire video, and is well suited to long-term temporal correspondence learning.
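As a minimal illustration of this segmentation step (the ASR output layout here is a hypothetical `(start, end, text)` format, not a specific toolkit's API):

```python
def split_by_asr(asr_segments, fps=30):
    """Split a long video into (caption, clip) pairs using ASR timestamps.

    asr_segments: list of (start_sec, end_sec, text) tuples from ASR.
    Returns the caption texts and the matching clip frame ranges.
    """
    captions, clips = [], []
    for start, end, text in asr_segments:
        captions.append(text)
        # map the caption's time span to a frame-index range for the clip
        clips.append((int(start * fps), int(end * fps)))
    return captions, clips

# toy transcript: two captions and their time spans
captions, clips = split_by_asr([(0.0, 3.2, "preheat the oven"),
                                (3.2, 7.5, "mix flour and sugar")])
```

Each caption then pairs with exactly one clip, which is the starting alignment that the noisy-correspondence problem below corrupts.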

However, noisy correspondence (NC) [3-4] is widespread between video clips and text captions: the video content and the text corpus are incorrectly mapped to each other. As shown in Figure 2 below, the noisy correspondences between video and text occur at multiple granularities.


Figure 2: Multi-granularity noisy correspondence. In this example, the video is divided into 6 clips according to the text captions. (Left) A green timeline indicates that a caption can be aligned with the video content, while a red timeline indicates that it cannot be aligned with any of the video content. The green text in t5 marks the part related to the video content v5. (Right) Dotted lines denote the originally given alignment, with red marking incorrect alignments and green marking the true alignments. Solid lines show the result of realignment by the Dynamic Time Warping algorithm, which also fails to handle the noisy correspondence challenge.

  • Coarse-grained NC (clip-caption). Coarse-grained NC falls into two categories, asynchronous and irrelevant, distinguished by whether the clip or caption can be matched to some existing caption or clip at all. "Asynchronous" refers to a temporal misalignment between a clip and its caption, as with t1 in Figure 2; this produces a mismatch between statements and actions, since the narrator may speak before or after actually performing the action. "Irrelevant" refers to meaningless captions that cannot be aligned with any clip (such as t2 and t6), or to irrelevant clips. According to a study by the Oxford Visual Geometry Group [5], only about 30% of the clip-caption pairs in the HowTo100M dataset are visually alignable, and only 15% are aligned as originally given.
  • Fine-grained NC (frame-word). Within a clip, only part of the text description may be relevant to it. In Figure 2, the caption t5 "Sprinkle sugar on it" is strongly related to the visual content v5, but the phrase "Observe the glaze peeling off" is not. Irrelevant words or frames can obstruct the extraction of key information and degrade clip-caption alignment.

## Method

This paper proposes a NOise Robust Temporal Optimal transport framework (Norton) that learns video representations at multiple granularities in a late-fusion manner, through video-paragraph contrastive learning and clip-caption contrastive learning, significantly reducing training time.


Figure 3: Illustration of the video-paragraph contrastive learning algorithm.

1) Video-paragraph contrast. As shown in Figure 3, the researchers adopt a fine-to-coarse strategy for multi-granularity correspondence learning: frame-word correspondences are first aggregated into clip-caption correspondences, which are further aggregated into video-paragraph correspondences, and long-term correlations are finally captured through video-level contrastive learning. The multi-granularity noisy correspondences are handled as follows:

  • For fine-grained NC. The researchers use a log-sum-exp approximation as a soft-maximum operator to identify keywords and key frames in the frame-to-word and word-to-frame alignments, extracting important information through fine-grained interaction and accumulating it into a clip-caption similarity.
  • For coarse-grained asynchronous NC. The researchers use the optimal transport distance as the distance metric between video clips and captions. Given a clip-caption similarity matrix $\mathbf{S} \in \mathbb{R}^{n \times m}$, where $n$ and $m$ denote the numbers of clips and captions, the optimal transport objective maximizes the overall alignment similarity, which naturally handles asynchronous or one-to-many alignments (such as t3 corresponding to v4 and v5):

$$\max_{\mathbf{T} \geq 0} \; \langle \mathbf{T}, \mathbf{S} \rangle \quad \text{s.t.} \quad \mathbf{T}\mathbf{1}_m = \boldsymbol{\mu}, \;\; \mathbf{T}^{\top}\mathbf{1}_n = \boldsymbol{\nu}$$

where the marginals $\boldsymbol{\mu}$ and $\boldsymbol{\nu}$ are uniform distributions giving equal weight to each clip and caption, and $\mathbf{T}$ is the transport assignment (realignment) matrix, which can be solved with the Sinkhorn algorithm.
  • For coarse-grained irrelevant NC. Inspired by SuperGlue [6] in feature matching, the researchers design an adaptive alignable prompt bucket to filter out irrelevant clips and captions. The prompt bucket is an extra row and column of a constant value appended to the similarity matrix $\mathbf{S}$, where the constant acts as a similarity threshold for whether a pair is alignable. The prompt bucket integrates seamlessly into the Sinkhorn solver for optimal transport.
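A minimal sketch of the log-sum-exp soft-maximum aggregation described above (the temperature `alpha` and the symmetric frame/word averaging are illustrative assumptions, not the paper's exact operator):

```python
import numpy as np

def clip_caption_similarity(frame_word_sim, alpha=0.1):
    """Aggregate a frame-by-word similarity matrix into one clip-caption score.

    The log-sum-exp acts as a soft maximum: salient words (for each frame)
    and salient frames (for each word) dominate, down-weighting irrelevant
    tokens and frames. frame_word_sim: (n_frames, n_words) array.
    """
    # soft-max over words for each frame, averaged over frames
    frame_to_word = (alpha * np.log(np.exp(frame_word_sim / alpha).sum(axis=1))).mean()
    # soft-max over frames for each word, averaged over words
    word_to_frame = (alpha * np.log(np.exp(frame_word_sim / alpha).sum(axis=0))).mean()
    return 0.5 * (frame_to_word + word_to_frame)
```

As `alpha` shrinks, the operator approaches a hard maximum; larger values blend in more of the remaining frame-word pairs.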


Measuring the sequence distance through optimal transport, instead of directly modeling the long video, significantly reduces the computational cost. The final video-paragraph loss function is as follows, where $\hat{S}_{ij}$ denotes the optimal transport similarity between the $i$-th long video and the $j$-th text paragraph:

$$\mathcal{L}_{\text{video}} = -\sum_{i} \log \frac{\exp(\hat{S}_{ii})}{\sum_{j} \exp(\hat{S}_{ij})}$$
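A minimal NumPy sketch of an entropy-regularized Sinkhorn solver with an appended prompt bucket, in the spirit of the method above (the bucket value `p`, regularization `eps`, and iteration count are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def sinkhorn_with_prompt(S, p=0.3, eps=0.1, iters=50):
    """Realign a clip-caption similarity matrix S (n x m) via Sinkhorn.

    An extra row/column of constant similarity p (the 'prompt bucket')
    is appended so that unalignable clips/captions can send their mass
    there instead of being forced onto a real match.
    """
    n, m = S.shape
    Sa = np.full((n + 1, m + 1), p)        # append prompt row and column
    Sa[:n, :m] = S
    K = np.exp(Sa / eps)                   # Gibbs kernel
    mu = np.full(n + 1, 1.0 / (n + 1))     # uniform clip marginals
    nu = np.full(m + 1, 1.0 / (m + 1))     # uniform caption marginals
    u = np.ones(n + 1)
    v = np.ones(m + 1)
    for _ in range(iters):                 # alternating marginal scaling
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    T = u[:, None] * K * v[None, :]        # transport (realignment) plan
    return T[:n, :m]                       # drop the prompt bucket
```

Entries routed to the dropped bucket simply vanish from the returned plan, which is how irrelevant clips and captions get filtered.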

2) Clip-caption contrast. This loss ensures the accuracy of the clip-caption alignment used within the video-paragraph contrast. Since self-supervised contrastive learning can mistakenly treat semantically similar samples as negatives, the researchers use optimal transport to identify and correct potential false negatives:

$$\mathcal{L}_{\text{clip}} = -\frac{1}{K}\sum_{i=1}^{K}\sum_{j=1}^{K} \tilde{\mathbf{I}}_{ij}\, \log \frac{\exp(\mathbf{S}_{ij})}{\sum_{k} \exp(\mathbf{S}_{ik})}, \qquad \tilde{\mathbf{I}} = \alpha\,\mathbf{I} + (1-\alpha)\,\hat{\mathbf{T}}$$

where $K$ denotes the total number of video clips and captions in the training batch, the identity matrix $\mathbf{I}$ is the standard alignment target of the contrastive cross-entropy loss, $\tilde{\mathbf{I}}$ is the realigned target after incorporating the optimal transport correction $\hat{\mathbf{T}}$, and $\alpha$ is a weighting coefficient.
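The false-negative correction can be sketched as a soft cross-entropy target that blends the identity with a Sinkhorn realignment plan (a sketch under assumptions; `alpha`, `eps`, and the iteration count are illustrative, not the paper's values):

```python
import numpy as np

def corrected_contrastive_loss(S, alpha=0.7, eps=0.1, iters=30):
    """Clip-caption contrastive loss with an OT-corrected soft target.

    S: (K, K) in-batch clip-caption similarity matrix. Instead of the
    identity target alone, a Sinkhorn plan Q redistributes some target
    mass onto likely false negatives so they are not pushed apart.
    """
    K = S.shape[0]
    P = np.exp(S / eps)
    u = np.ones(K)
    v = np.ones(K)
    for _ in range(iters):              # Sinkhorn: scale rows/cols to sum to 1
        v = 1.0 / (P.T @ u)
        u = 1.0 / (P @ v)
    Q = u[:, None] * P * v[None, :]     # realignment plan, rows sum to 1
    target = alpha * np.eye(K) + (1 - alpha) * Q

    # row-wise log-softmax of the similarities
    X = S - S.max(axis=1, keepdims=True)
    logp = X - np.log(np.exp(X).sum(axis=1, keepdims=True))
    return -(target * logp).sum(axis=1).mean()  # soft-target cross-entropy
```

With `alpha = 1` this reduces to the standard contrastive cross-entropy; smaller `alpha` trusts the realignment plan more.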

## Experiments

This work aims to overcome noisy correspondence to improve the model's understanding of long videos. The researchers verify this on concrete tasks including video retrieval, video question answering, and action segmentation; selected results follow.

1) Long video retrieval

The goal of this task is to retrieve the corresponding long video given a text paragraph. On the YouCookII dataset, the researchers tested two settings, background kept and background removed, depending on whether text-irrelevant video clips are retained. Three similarity criteria are used: Caption Average, DTW, and OTAM. Caption Average matches each caption in the paragraph to its best video clip and retrieves the long video that receives the most matches; DTW and OTAM accumulate the distance between video and text paragraph in chronological order. The results are shown in Tables 1 and 2 below.
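The Caption Average criterion described above can be sketched as a voting scheme (the `sim` data layout is a hypothetical simplification: one pre-computed best-clip score per caption per candidate video):

```python
from collections import Counter

def caption_average_retrieval(sim):
    """Retrieve the long video whose clips best match a query paragraph.

    sim[v][c] = best similarity between caption c of the query paragraph
    and any clip of candidate video v. Each caption votes for the video
    holding its best-matching clip; the most-voted video is returned.
    """
    n_captions = len(next(iter(sim.values())))
    votes = Counter()
    for c in range(n_captions):
        best_video = max(sim, key=lambda v: sim[v][c])
        votes[best_video] += 1
    return votes.most_common(1)[0][0]

# toy example: 3 captions, 2 candidate videos
winner = caption_average_retrieval({"video_a": [0.9, 0.2, 0.8],
                                    "video_b": [0.1, 0.7, 0.3]})
```

Unlike DTW/OTAM, this criterion ignores temporal order entirely, which is why the paper reports all three.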


Tables 1 and 2: Long video retrieval performance on the YouCookII dataset.

2) Noise correlation robustness analysis

The Oxford Visual Geometry Group manually re-annotated videos in HowTo100M, assigning each text caption a correct timestamp. The resulting HTM-Align dataset [5] contains 80 videos and 49K sentences. Video retrieval on this dataset mainly verifies whether a model has overfit the noisy correspondences; the results are shown in Table 9 below.


Table 9: Noise correlation robustness analysis on the HTM-Align dataset.

## Summary and Outlook

This work is an in-depth continuation of noisy correspondence learning [3][4], the study of mismatched or wrongly associated data, examining the multi-granularity noisy correspondence problem faced by multimodal video-text pre-training. The proposed long-video learning method can be extended to a wider range of video data at a lower resource overhead.

Looking ahead, researchers could further explore correspondences among more modalities, since videos often contain visual, textual, and audio signals; they could try combining external large language models (LLMs) or multimodal models (such as BLIP-2) to clean and restructure the text corpus; and they could explore using noise as a positive signal for model training, rather than merely suppressing its negative impact.

References:
1. This site, "Yann LeCun: Generative models are not suitable for processing videos; AI has to make predictions in abstract space", 2024-01-23.
2. Sun, Y., Xue, H., Song, R., Liu, B., Yang, H., & Fu, J. (2022). Long-form video-language pre-training with multimodal temporal contrastive learning. Advances in Neural Information Processing Systems, 35, 38032-38045.
3. Huang, Z., Niu, G., Liu, X., Ding, W., Xiao, X., Wu, H., & Peng, X. (2021). Learning with noisy correspondence for cross-modal matching. Advances in Neural Information Processing Systems, 34, 29406-29419.
4. Lin, Y., Yang, M., Yu, J., Hu, P., Zhang, C., & Peng, X. (2023). Graph matching with bi-level noisy correspondence. In Proceedings of the IEEE/CVF International Conference on Computer Vision.
5. Han, T., Xie, W., & Zisserman, A. (2022). Temporal alignment networks for long-term video. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2906-2916).
6. Sarlin, P. E., DeTone, D., Malisiewicz, T., & Rabinovich, A. (2020). SuperGlue: Learning feature matching with graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4938-4947).
