The human motion generation task aims to generate realistic human motion sequences for entertainment, virtual reality, robotics, and other fields. Traditional production pipelines involve steps such as 3D character creation, keyframe animation, and motion capture, which suffer from many limitations: they are time-consuming, require specialized technical knowledge, involve expensive systems and software, and may run into compatibility issues between different software and hardware systems. With the development of deep learning, people have begun to use generative models to automatically produce human motion sequences, for example by inputting a text description and asking the model to generate a motion sequence that matches it. As diffusion models have been introduced into the field, the consistency between generated motions and the given text has continued to improve.
However, although the naturalness of the generated motions has improved, a considerable gap remains between the results and user needs. To further improve the capability of human motion generation algorithms, this paper proposes the ReMoDiffuse algorithm (Figure 1), built on MotionDiffuse [1]. Through a retrieval strategy, it finds highly relevant reference samples and provides fine-grained reference features, thereby generating higher-quality motion sequences.
- Paper link: https://arxiv.org/pdf/2304.01116.pdf
- GitHub link: https://github.com/mingyuan-zhang/ReMoDiffuse
- Project homepage: https://mingyuan-zhang.github.io/projects/ReMoDiffuse.html
By cleverly integrating diffusion models with an innovative retrieval strategy, ReMoDiffuse breathes new life into text-guided human motion generation. With its carefully designed model structure, ReMoDiffuse not only creates rich, diverse, and highly realistic motion sequences, but also effectively meets motion requirements of varying lengths and granularities. Experiments show that ReMoDiffuse performs well on multiple key metrics in motion generation, significantly surpassing existing algorithms.
Figure 1. Overview of ReMoDiffuse
Method
The main pipeline of ReMoDiffuse consists of two stages: retrieval and diffusion. In the retrieval stage, ReMoDiffuse uses a hybrid retrieval technique to retrieve informative samples from an external multi-modal database, based on the user's input text and the expected motion sequence length, providing strong guidance for motion generation. In the diffusion stage, ReMoDiffuse uses the information obtained in the retrieval stage to generate, through an efficient model structure, a motion sequence that is semantically consistent with the user's input.
To ensure efficient retrieval, ReMoDiffuse adopts the following data flow in the retrieval stage (Figure 2):
Three types of data are involved in the retrieval process: the user's input text, the expected motion sequence length, and an external multi-modal database containing multiple ⟨text, motion⟩ pairs. To retrieve the most relevant samples, ReMoDiffuse computes a similarity score between each database entry and the user input that combines two terms: the first uses the text encoder of the pre-trained CLIP [2] model to compute the cosine similarity between the user's text and the entry's text; the second takes the relative difference between the expected motion length and the entry's motion length as a kinematic similarity. After scoring, ReMoDiffuse selects the top-k samples with the highest similarity and extracts their text features and motion features. These, together with the features extracted from the user's input text, serve as the condition signals for the diffusion stage to guide motion generation.
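To make this concrete, here is a minimal sketch of such a hybrid scoring function, assuming the off-the-shelf OpenAI CLIP package as the text encoder. The multiplicative combination and the `lambda_kin` weight are illustrative choices, not the paper's exact formulation.

```python
import torch
import clip  # OpenAI CLIP, used here as the pre-trained text encoder

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def encode_text(prompts):
    tokens = clip.tokenize(prompts).to(device)
    feats = model.encode_text(tokens).float()
    # Unit-normalize so dot products become cosine similarities.
    return feats / feats.norm(dim=-1, keepdim=True)

def hybrid_scores(user_text, expected_len, db_texts, db_lens, lambda_kin=1.0):
    """Score every database entry against the user input (illustrative)."""
    text_feat = encode_text([user_text])            # (1, d)
    db_feats = encode_text(db_texts)                # (N, d)
    semantic = (db_feats @ text_feat.T).squeeze(1)  # cosine similarity per entry

    # Kinematic similarity: penalize the relative length difference.
    db_lens = torch.tensor(db_lens, dtype=torch.float32, device=device)
    rel_diff = (db_lens - expected_len).abs() / torch.maximum(
        db_lens, torch.tensor(float(expected_len), device=device))
    kinematic = torch.exp(-lambda_kin * rel_diff)

    return semantic * kinematic

# Top-k retrieval over the database:
# scores = hybrid_scores("a person jumps in a circle", 120, db_texts, db_lens)
# topk_indices = scores.topk(k=4).indices
```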
Figure 2: Retrieval stage of ReMoDiffuse

The diffusion process (Figure 3.c) consists of two parts: a forward process and a reverse process. In the forward process, ReMoDiffuse gradually adds Gaussian noise to the original motion data until it becomes random noise. The reverse process removes this noise to generate realistic motion samples. Starting from random Gaussian noise, at each step of the reverse process ReMoDiffuse uses a Semantics-Modulated Transformer (SMT) (Figure 3.a) to estimate the true distribution and gradually remove the noise according to the condition signals. The semantics-modulated attention (SMA) module inside the SMT integrates all the condition information into the features of the generated sequence, and is the core module proposed in this paper.
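For reference, here is a minimal DDPM-style sketch of the two processes, assuming a standard linear beta schedule and a noise-prediction parameterization; `denoiser` stands in for the SMT, and the schedule and parameterization ReMoDiffuse actually uses may differ.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)          # assumed linear schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def forward_noise(x0, t):
    """Forward process q(x_t | x_0): corrupt clean motion x0 (B, N, D)
    with Gaussian noise at timesteps t (B,)."""
    noise = torch.randn_like(x0)
    a = alphas_bar[t].sqrt().view(-1, 1, 1)
    s = (1.0 - alphas_bar[t]).sqrt().view(-1, 1, 1)
    return a * x0 + s * noise, noise

@torch.no_grad()
def sample(denoiser, cond, shape):
    """Reverse process: start from noise and denoise step by step given
    the condition signals (user text features + retrieved features)."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        tt = torch.full((shape[0],), t, dtype=torch.long)
        eps = denoiser(x, tt, cond)       # predict the added noise
        beta, a_bar = betas[t], alphas_bar[t]
        # Posterior mean under the noise-prediction parameterization.
        x = (x - beta / (1 - a_bar).sqrt() * eps) / (1 - beta).sqrt()
        if t > 0:
            x = x + beta.sqrt() * torch.randn_like(x)
    return x
```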
Fig. 3: Diffusion stage of ReMoDiffuse
For the SMA layer (Figure 3.b), we use Efficient Attention [3] to accelerate the computation of the attention module and to build a global feature map that emphasizes global information. This feature map provides more comprehensive semantic cues for the motion sequence, improving the performance of the model. The core goal of the SMA layer is to optimize motion sequence generation by aggregating the condition information. Under this framework:
1. The query (Q) vector represents the motion sequence we expect to generate based on the condition information.
2. The key (K) vector serves as an indexing mechanism and comprehensively considers multiple factors, including the current motion sequence features, the semantic features of the user's input text, and both the motion features and the text features extracted from the retrieved samples. This comprehensive construction ensures that the K vector indexes the conditions effectively.
3. The value (V) vector provides the actual features needed to generate the motion. Like the K vector, it takes the retrieved samples, the user input, and the current motion sequence into account. However, since the text features of the retrieved samples have no direct correspondence with the generated motion, we exclude them when computing the V vector to avoid unnecessary interference.
Combined with the global attention template mechanism of Efficient Attention, the SMA layer uses the auxiliary information from the retrieved samples, the semantic information of the user's text, and the features of the sequence being denoised to build a set of comprehensive global templates, so that all the condition information can be fully absorbed by the sequence being generated. A simplified sketch of such a layer follows.
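The sketch below assumes single-head attention and illustrative shapes; it is not the released implementation, and zeroing out the retrieved-text values is just one way to realize the exclusion described in point 3 above while keeping K and V token counts aligned.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SMASketch(nn.Module):
    """Simplified semantics-modulated attention built on efficient
    attention [3]. Projections and shapes are illustrative only."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)  # queries from the sequence being denoised
        self.k = nn.Linear(dim, dim)  # keys index all condition sources
        self.v = nn.Linear(dim, dim)  # values carry the features to aggregate

    def forward(self, x, text_feat, ret_motion, ret_text):
        # x:          (B, N, D)  noisy motion sequence features
        # text_feat:  (B, Lt, D) user text features
        # ret_motion: (B, Lm, D) motion features of retrieved samples
        # ret_text:   (B, Lr, D) text features of retrieved samples
        q = self.q(x)
        # K sees everything, including the retrieved text features...
        k = self.k(torch.cat([x, text_feat, ret_motion, ret_text], dim=1))
        # ...while V masks out the retrieved text features, which have no
        # direct correspondence with the generated motion (illustrative).
        v = self.v(torch.cat([x, text_feat, ret_motion,
                              torch.zeros_like(ret_text)], dim=1))
        # Efficient attention: softmax over channels for Q, over tokens for K,
        # then form global templates K^T V once (linear in sequence length).
        q = F.softmax(q, dim=-1)
        k = F.softmax(k, dim=1)
        templates = k.transpose(1, 2) @ v  # (B, D, D) global feature map
        return x + q @ templates           # modulate the sequence features
```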
Research Design and Experimental Results
We evaluated ReMoDiffuse on two datasets, HumanML3D [4] and KIT-ML [5]. The experimental results (Tables 1 and 2) demonstrate the strong performance and advantages of the proposed ReMoDiffuse framework in terms of both text consistency and motion quality.
Table 1. Performance of different methods on the HumanML3D test set
Table 2. Performance of different methods on the KIT-ML test set
The following examples demonstrate the strong performance of ReMoDiffuse (Figure 4). For instance, given the text "A person jumps in a circle," only ReMoDiffuse accurately captures both the "jumping" motion and the "circle" path, while previous methods do not. This shows that ReMoDiffuse effectively captures text details and aligns the content with the given motion duration.
Figure 4. Comparison of motion sequences generated by ReMoDiffuse and by other methods
We visually compared the motion sequences generated by the method of Guo et al. [4], MotionDiffuse [1], MDM [6], and ReMoDiffuse, and collected participants' opinions via a questionnaire. The distribution of the results is shown in Figure 5. The results clearly show that in most cases, participants judged the sequences generated by our method, ReMoDiffuse, to be the most consistent with the given text description among the four algorithms, as well as the most natural and smooth.
Figure 5: Distribution of user survey results
References
[1] Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, and Ziwei Liu. MotionDiffuse: Text-driven human motion generation with diffusion model. arXiv preprint arXiv:2208.15001, 2022.
[2] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020, 2021.
[3] Zhuoran Shen, Mingyuan Zhang, Haiyu Zhao, Shuai Yi, and Hongsheng Li. Efficient attention: Attention with linear complexities. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3531–3539, 2021.
[4] Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. Generating diverse and natural 3d human motions from text. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5152–5161, 2022.
[5] Matthias Plappert, Christian Mandery, and Tamim Asfour. The KIT Motion-Language Dataset. Big Data, 4(4):236–252, 2016.
[6] Guy Tevet, Sigal Raab, Brian Gordon, Yonatan Shafir, Daniel Cohen-Or, and Amit H. Bermano. Human motion diffusion model. In The Eleventh International Conference on Learning Representations, 2022.