
Based on cross-modal meta-transfer, the referring video object segmentation method from Meitu & Dalian University of Technology requires only a single stage


Introduction

Referring VOS (RVOS) is a newly emerging task that aims to segment the object referred to by a text description from a video sequence. Compared with semi-supervised video object segmentation, RVOS relies only on an abstract language description rather than a pixel-level reference mask, making it a more convenient option for human-computer interaction; it has therefore received widespread attention.


Paper link: https://www.aaai.org/AAAI22Papers/AAAI-1100.LiD.pdf

This research aims to solve two major challenges faced by existing RVOS methods:

  • How to fuse text information with image information across modalities, so as to maintain scale consistency between the two modalities and fully inject the useful feature cues provided by the text into the image features;
  • How to abandon the two-stage strategy of existing methods (that is, first obtaining coarse frame-by-frame results at the image level, then using those results as references and refining them with enhanced temporal information to produce the final prediction) and unify the entire RVOS task into a single-stage framework.

To this end, the study proposes YOFO, an end-to-end RVOS framework based on cross-modal meta-transfer. Its main contributions and innovations are:

  • Only single-stage inference is needed to obtain the segmentation of the video target directly from the reference text; as a result, the results on the two mainstream datasets Ref-DAVIS2017 and Ref-Youtube-VOS surpass all current two-stage methods;
  • It proposes a meta-transfer (Meta-Transfer) module to enhance temporal information, thereby achieving more target-focused feature learning;
  • It proposes a multi-scale cross-modal feature mining (Multi-Scale Cross-Modal Feature Mining) module that fully fuses the useful features of language and images.

Implementation Strategy

The main pipeline of the YOFO framework is as follows: the input image and text first pass through an image encoder and a language encoder, respectively, to extract features, which are then fused in the multi-scale cross-modal feature mining module. The fused bimodal features are distilled in the meta-transfer module, which contains a memory bank, to remove redundant information from the language features while preserving temporal information and strengthening temporal correlation; the segmentation result is finally produced by a decoder.


Figure 1: Main pipeline of the YOFO framework.
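
To make the pipeline concrete, here is a minimal PyTorch-style sketch of the data flow described above. Every component is a toy stand-in (a single conv for the backbone, an embedding for the text encoder, and so on), chosen only to show how the stages connect; none of this is the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class YOFOSketch(nn.Module):
    """Illustrative stand-ins for each YOFO stage; not the paper's code."""
    def __init__(self, vis_dim=256, lang_dim=256, vocab=10000):
        super().__init__()
        self.image_encoder = nn.Conv2d(3, vis_dim, 3, stride=4, padding=1)  # stand-in backbone
        self.language_encoder = nn.Embedding(vocab, lang_dim)               # stand-in text encoder
        self.fusion = nn.Conv2d(vis_dim + lang_dim, vis_dim, 1)             # cross-modal fusion
        self.meta_transfer = nn.Conv2d(vis_dim, vis_dim, 1)                 # placeholder for meta-transfer
        self.decoder = nn.Conv2d(vis_dim, 1, 1)                             # mask head

    def forward(self, frame, token_ids):
        v = self.image_encoder(frame)                     # B x C x H x W visual features
        l = self.language_encoder(token_ids).mean(dim=1)  # B x C pooled language feature
        l_map = l[:, :, None, None].expand(-1, -1, v.shape[-2], v.shape[-1])
        fused = self.fusion(torch.cat([v, l_map], dim=1))  # bimodal features
        refined = self.meta_transfer(fused)                # target-focused distillation
        logits = self.decoder(refined)
        return F.interpolate(logits, size=frame.shape[-2:], mode="bilinear",
                             align_corners=False)

# Usage with dummy inputs:
# model = YOFOSketch()
# logits = model(torch.rand(1, 3, 64, 64), torch.randint(0, 10000, (1, 8)))
```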

Multi-scale cross-modal feature mining module: this module fuses the features of the two modalities at multiple scales, maintaining consistency between the scale information conveyed by the image features and the language features. More importantly, it ensures that the language information is not diluted and overwhelmed by the multi-scale image information during fusion.


Figure 2: Multi-scale cross-modal feature mining module.
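
The idea can be sketched as follows: the language feature is projected and re-injected at every image scale, so each scale receives an undiluted language signal. The sigmoid-gated fusion below is an illustrative choice, not the paper's exact operator.

```python
import torch
import torch.nn as nn

class MultiScaleCrossModalFusion(nn.Module):
    """Minimal sketch: per-scale language projections keep scale consistency
    and stop the language cue from being diluted by the image pyramid."""
    def __init__(self, vis_dims=(256, 512, 1024), lang_dim=256):
        super().__init__()
        self.lang_proj = nn.ModuleList(nn.Linear(lang_dim, d) for d in vis_dims)
        self.mix = nn.ModuleList(nn.Conv2d(d, d, 1) for d in vis_dims)

    def forward(self, vis_feats, lang_feat):
        # vis_feats: list of B x C_s x H_s x W_s tensors; lang_feat: B x lang_dim
        fused = []
        for feat, proj, mix in zip(vis_feats, self.lang_proj, self.mix):
            gate = torch.sigmoid(proj(lang_feat))[:, :, None, None]  # language gate
            fused.append(mix(feat * gate))  # language-conditioned features per scale
        return fused
```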

Meta-transfer module: a learning-to-learn strategy is adopted, and the process can be described by the following mapping, where the transfer function $\mathcal{F}$ is a convolution and $\omega$ is its convolution kernel parameter:

$$\hat{Y} = \mathcal{F}(X;\ \omega)$$

The optimization process can be expressed as the following objective function:

$$\omega^{*} = \arg\min_{\omega}\ \big\| W \odot \big( \mathcal{F}(M;\ \omega) - Y \big) \big\|^{2}$$

Among them, M denotes a memory bank that stores historical information; W denotes position-wise weights that let the model pay different amounts of attention to different positions in the feature; and Y denotes the bimodal features of each video frame stored in the memory bank. This optimization maximizes the ability of the meta-transfer function to reconstruct the bimodal features, and it also allows the entire framework to be trained end to end.
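
The inner optimization can be sketched as a few differentiable gradient steps on the kernel ω, as below. The 1×1 convolution, step count, and learning rate are assumptions made for illustration; the paper may solve the objective differently.

```python
import torch
import torch.nn.functional as F

def meta_transfer(memory, target, weight, feat, steps=5, lr=0.1):
    """Sketch of the learning-to-learn step. Shapes (all assumed):
    memory: B x C x H x W bimodal features from the memory bank (M)
    target: B x C x H x W features to reconstruct (Y)
    weight: B x 1 x H x W position-wise attention weights (W)
    feat:   B x C x H x W current-frame bimodal features to transfer
    """
    C = memory.shape[1]
    kernel = torch.zeros(C, C, 1, 1, device=memory.device, requires_grad=True)  # omega
    with torch.enable_grad():
        for _ in range(steps):
            recon = F.conv2d(memory, kernel)                  # F(M; omega)
            loss = (weight * (recon - target)).pow(2).mean()  # ||W ⊙ (F(M;ω) − Y)||²
            grad, = torch.autograd.grad(loss, kernel, create_graph=True)
            kernel = kernel - lr * grad                       # differentiable inner step
    return F.conv2d(feat, kernel)  # apply the learned transfer to the current frame
```

Because the inner steps are built with create_graph=True, gradients flow through the optimization itself, which is what lets the whole framework train end to end.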

Training and testing: the loss function used for training is the Lovász loss, and the training sets are the two video datasets Ref-DAVIS2017 and Ref-Youtube-VOS; in addition, the static dataset Ref-COCO is used for auxiliary training by applying random affine transformations to simulate video data. Meta-transfer is performed during both training and inference, and the entire network runs at 10 FPS on a 1080Ti.
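
The auxiliary-training trick, simulating a short clip from a static Ref-COCO image, can be sketched as follows; the transform ranges and frame count are illustrative values, not the paper's settings.

```python
import random
import torch
import torchvision.transforms.functional as TF

def simulate_clip(image, mask, num_frames=3, max_deg=10,
                  max_translate=0.05, scale_range=(0.95, 1.05)):
    """Turn one static image/mask pair (C x H x W tensors) into a pseudo video
    clip by applying small random affine transforms to both consistently."""
    frames, masks = [image], [mask]
    h, w = image.shape[-2:]
    for _ in range(num_frames - 1):
        angle = random.uniform(-max_deg, max_deg)
        tx = int(random.uniform(-max_translate, max_translate) * w)
        ty = int(random.uniform(-max_translate, max_translate) * h)
        scale = random.uniform(*scale_range)
        # the same transform is applied to image and mask so labels stay aligned
        frames.append(TF.affine(image, angle=angle, translate=[tx, ty],
                                scale=scale, shear=[0.0]))
        masks.append(TF.affine(mask, angle=angle, translate=[tx, ty],
                               scale=scale, shear=[0.0]))
    return torch.stack(frames), torch.stack(masks)
```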

Experimental results

The proposed method achieves excellent results on the two mainstream RVOS datasets (Ref-DAVIS2017 and Ref-Youtube-VOS). The quantitative results and some visualizations are shown below:


Figure 3: Quantitative results on the two mainstream datasets.


Figure 4: Visualization on the VOS dataset.


Figure 5: Additional visualization results of YOFO.

The study also conducted a series of ablation experiments to demonstrate the effectiveness of the feature mining module (FM) and the meta-transfer module (MT).


Figure 6: Effectiveness of feature mining module (FM) and meta-transfer module (MT).

In addition, the study visualized the decoder's output features with and without the MT module. It is clear that the MT module correctly captures the content described by the language and filters out distracting noise.


Figure 7: Comparison of decoder output features before and after using the MT module.

About the research team

This paper was jointly proposed by researchers from the Meitu Imaging Research Institute (MT Lab) and Lu Huchuan's team at Dalian University of Technology. Meitu Imaging Research Institute (MT Lab) is Meitu's team dedicated to algorithm research, engineering development, and productization in fields such as computer vision, machine learning, augmented reality, and cloud computing. It provides core algorithm support for Meitu's existing and future products and promotes their development through cutting-edge technology, and is known as Meitu's "technology center". The team has participated in top international computer vision conferences such as CVPR, ICCV, and ECCV, winning more than ten championship and runner-up titles.
