
Google's AI video work impresses again! VideoPrism, an all-in-one universal visual encoder, sets 30 new SOTA records

WBOY
Release: 2024-02-26 09:58:24
After the AI video model Sora became popular, major companies such as Meta and Google have stepped up their research to catch up with OpenAI.

Recently, researchers from the Google team proposed a universal video encoder: VideoPrism.

It can handle various video understanding tasks through a single frozen model.


Paper address: https://arxiv.org/pdf/2402.13217.pdf

For example, VideoPrism can classify and localize the people blowing out candles in the video below.


With video-text retrieval, the relevant content in a video can be retrieved according to a text query.


For another example, it can describe the video below: a little girl is playing with building blocks.

It can also answer questions about the video (QA).

- What color are the blocks she placed above the green blocks?

- Purple.


The researchers pre-trained VideoPrism on a heterogeneous corpus containing 36 million high-quality video-caption pairs and 582 million video clips paired with noisy parallel text (such as ASR transcripts).

It is worth mentioning that VideoPrism set new SOTA results on 30 of 33 video understanding benchmarks.


Universal Visual Encoder VideoPrism

Currently, video foundation models (ViFMs) hold great potential for unlocking new capabilities across massive video corpora.

Although previous research has made great progress on general video understanding, building a true "foundational video model" remains an elusive goal.

In response, Google introduced VideoPrism, a general-purpose visual encoder designed to handle a wide range of video understanding tasks, including classification, localization, retrieval, captioning, and question answering (QA).

VideoPrism is extensively evaluated on CV datasets, as well as on CV tasks in scientific fields such as neuroscience and ecology.

It achieves state-of-the-art performance with minimal adaptation, using a single frozen model.

In addition, Google researchers say that this frozen-encoder setting follows previous research and reflects its practicality, given the high cost of computing and fine-tuning video models.


Design architecture, two-stage training method

The design concept behind VideoPrism is as follows.

Pre-training data is the foundation of a foundation model (FM). The ideal pre-training data for a ViFM would be a representative sample of all videos in the world.

In this sample, most videos do not have parallel text describing the content.

However, when such text is available, training on it provides invaluable semantic clues about the video space.

Therefore, Google's pre-training strategy focuses primarily on the video modality while fully utilizing any available video-text pairs.

On the data side, Google researchers approximated this ideal corpus by assembling 36 million high-quality video-caption pairs and 582 million video clips with noisy parallel text (such as ASR transcripts, generated captions, and retrieved text).


In terms of modeling, the authors first contrastively learn semantic video embeddings from all the video-text pairs of varying quality.

Masked video modeling, described below, is then improved by distilling the semantic embeddings globally and token-wise, using the broad collection of video-only data.

Despite its success in natural language, masked data modeling remains challenging for CV because the raw visual signal lacks semantics.

Existing research addresses this challenge by borrowing indirect semantics (e.g., using CLIP to guide the model or tokenizer) or by implicitly injecting them (e.g., tokenizing visual patches), combined with high masking ratios and lightweight decoders.
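To make the "high masking ratio" idea concrete, here is a minimal numpy sketch (not the authors' code; function names and the 90% ratio are illustrative): most patch tokens are hidden, and only the small visible subset is passed to the encoder, while the decoder must reconstruct the rest.

```python
import numpy as np

def mask_patches(tokens: np.ndarray, mask_ratio: float = 0.9, seed: int = 0):
    """Randomly mask a high fraction of patch tokens.

    tokens: (num_patches, dim) array of patch embeddings.
    Returns the visible tokens plus the index sets needed to restore order.
    """
    rng = np.random.default_rng(seed)
    n = tokens.shape[0]
    n_keep = max(1, int(round(n * (1.0 - mask_ratio))))
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:n_keep])   # indices of visible patches (encoder input)
    mask_idx = np.sort(perm[n_keep:])   # indices the decoder must predict
    return tokens[keep_idx], keep_idx, mask_idx

# Example: 16 patches at 90% masking -> only 2 visible tokens reach the encoder
patches = np.random.default_rng(1).normal(size=(16, 8))
visible, keep_idx, mask_idx = mask_patches(patches, mask_ratio=0.9)
```

Because the encoder sees only the visible subset, a lightweight decoder suffices for the reconstruction targets, which is what keeps this scheme cheap.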

Based on these ideas, the Google team adopted a two-stage pre-training approach.


In the first stage, contrastive learning is performed to align the video encoder with the text encoder using all video-text pairs.

Following previous research, the Google team minimizes a symmetric cross-entropy loss over the similarity scores of all video-text pairs in a batch.
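The symmetric cross-entropy objective can be sketched in a few lines of numpy (a CLIP-style illustration, not the paper's implementation; the temperature value is an assumption): each video should match its own text against all other texts in the batch, and vice versa.

```python
import numpy as np

def symmetric_contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric cross-entropy over in-batch similarity scores.

    video_emb, text_emb: (batch, dim) L2-normalized embeddings;
    pair i is the matched video-text pair.
    """
    logits = video_emb @ text_emb.T / temperature        # (B, B) similarity scores

    def xent(l):                                         # row-wise cross-entropy
        l = l - l.max(axis=1, keepdims=True)             # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))              # matched pairs on the diagonal

    # average of video->text and text->video directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
v = rng.normal(size=(8, 16)); v /= np.linalg.norm(v, axis=1, keepdims=True)
t = rng.normal(size=(8, 16)); t /= np.linalg.norm(t, axis=1, keepdims=True)
loss_random = symmetric_contrastive_loss(v, t)
loss_aligned = symmetric_contrastive_loss(v, v)  # perfectly matched pairs score lower
```

Minimizing this loss pulls each matched video-text pair together while pushing apart the mismatched pairs in the batch.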

CoCa's image model is used to initialize the spatial encoding module, and WebLI is incorporated into pre-training.

Before computing the loss, the video encoder's features are aggregated through multi-head attention pooling (MAP).
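Attention pooling collapses the encoder's token sequence into a single feature vector. Below is a simplified numpy sketch of the idea, assuming a single learned query and omitting the learned key/value/output projections a full MAP layer would have:

```python
import numpy as np

def map_pool(tokens, query, n_heads=4):
    """Multi-head attention pooling: a learned query attends over all
    tokens to produce one pooled feature vector.

    tokens: (seq_len, dim); query: (dim,) learned probe vector.
    """
    seq_len, dim = tokens.shape
    head_dim = dim // n_heads
    q = query.reshape(n_heads, head_dim)             # split the query into heads
    k = tokens.reshape(seq_len, n_heads, head_dim)   # keys = values = tokens here
    out = np.empty((n_heads, head_dim))
    for h in range(n_heads):
        scores = k[:, h] @ q[h] / np.sqrt(head_dim)  # (seq_len,) attention scores
        w = np.exp(scores - scores.max())
        w /= w.sum()                                 # softmax attention weights
        out[h] = w @ k[:, h]                         # weighted sum of values
    return out.reshape(dim)                          # pooled (dim,) feature

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))
pooled = map_pool(tokens, rng.normal(size=8), n_heads=4)
```

Unlike mean pooling, the learned query lets the model weight informative frames and patches more heavily when forming the video-level embedding.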

This stage allows the video encoder to learn rich visual semantics from linguistic supervision, and the resulting model provides semantic video embeddings for the second stage training.


In the second stage, the encoder continues training with two improvements:

- The model must predict, from the unmasked input video patches, both the video-level global embedding and the token-wise embeddings produced in the first stage

- The encoder's output tokens are randomly shuffled before being passed to the decoder, to avoid learning shortcuts.
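The token-shuffling step above is easy to illustrate (a numpy sketch under my own naming, not the paper's code): the decoder receives tokens in a random order so it cannot rely on positional shortcuts, and the inverse permutation restores the original order for computing the prediction targets.

```python
import numpy as np

def shuffle_for_decoder(encoder_tokens, seed=0):
    """Randomly permute encoder output tokens before the decoder sees them,
    and return the inverse permutation so predictions can be mapped back
    to their original positions.
    """
    rng = np.random.default_rng(seed)
    n = encoder_tokens.shape[0]
    perm = rng.permutation(n)
    inv = np.argsort(perm)          # inverse permutation
    return encoder_tokens[perm], inv

tokens = np.arange(12, dtype=float).reshape(6, 2)  # 6 toy tokens of dim 2
shuffled, inv = shuffle_for_decoder(tokens)
restored = shuffled[inv]                           # undo the shuffle exactly
```

Without this shuffle, a decoder could "cheat" by copying tokens based on position rather than learning content-based predictions.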

Notably, the pre-training leverages two supervision signals: the video's textual description and contextual self-supervision, allowing VideoPrism to perform well on both appearance- and action-centric tasks.

In fact, previous research shows that video captions mainly reveal appearance cues, while contextual supervision helps learn actions.


Experimental Results

Next, the researchers evaluated VideoPrism on a wide range of video-centric understanding tasks, demonstrating its capabilities and versatility.

The tasks fall into the following four categories:

(1) General video-only understanding, including classification and spatio-temporal localization

(2) Zero-shot video-text retrieval

(3) Zero-shot video captioning and question answering (QA)

(4) CV tasks in scientific domains

Classification and spatiotemporal localization

Table 2 shows the frozen-backbone results on VideoGLUE.

VideoPrism significantly outperforms the baseline on all datasets. Furthermore, increasing VideoPrism’s underlying model size from ViT-B to ViT-g significantly improves performance.

It is worth noting that no single baseline method achieves even the second-best result across all benchmarks, suggesting that previous methods may have been developed to target only certain aspects of video understanding.

VideoPrism, by contrast, improves consistently across this broad set of tasks.

This result shows that VideoPrism integrates diverse video signals into one encoder: semantics at multiple granularities, appearance and motion cues, spatio-temporal information, and robustness to different video sources (such as web videos and scripted performances).


Zero-shot video text retrieval and classification

Tables 3 and 4 summarize the results of video text retrieval and video classification respectively.

VideoPrism sets new records on multiple benchmarks, and on challenging datasets it achieves very significant improvements over prior work.


Most results of the base model, VideoPrism-B, actually outperform those of existing larger-scale models.

Furthermore, VideoPrism is comparable to, or even better than, the models in Table 4 that were pre-trained with in-domain data and additional modalities (e.g., audio). These improvements on zero-shot retrieval and classification tasks reflect VideoPrism's strong generalization ability.


Zero-shot video captioning and QA

Tables 5 and 6 show the results for zero-shot video captioning and QA, respectively.

Despite the simple model architecture and the small number of adapter parameters, the model remains competitive and, with the exception of VATEX, ranks among the top methods that freeze both the vision and language models.

The results show that the VideoPrism encoder can generalize well to video-to-language generation tasks.


CV tasks in science

The generic ViFM uses a single shared frozen encoder across all evaluations, yet its performance is comparable to that of domain-specific models specialized for individual tasks.

In particular, VideoPrism often performs best, outperforming domain-expert models even at the base model scale.

Scaling up to the large model further improves performance on all datasets. These results demonstrate that ViFMs have the potential to significantly accelerate video analysis across different fields.


Ablation Study

Figure 4 shows the ablation results. Notably, VideoPrism's continued improvements on SSv2 demonstrate the effectiveness of the data curation and model design efforts in promoting motion understanding in video.

Although the contrastive baseline already achieves competitive results on K400, the proposed global distillation and token shuffling further improve accuracy.


Reference:

https://arxiv.org/pdf/2402.13217.pdf

https://blog.research.google/2024/02/videoprism-foundational-visual-encoder.html

Source: 51cto.com