
CVPR 2024 | AI can now faithfully reproduce a skirt swirling in dance: Nanyang Technological University proposes a new paradigm for dynamic human rendering

Apr 22, 2024, 02:37 PM


AIxiv is a column where this site publishes academic and technical content. Over the past few years, it has received more than 2,000 submissions covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work to share, please submit it or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com.

In daily activities, human movements often induce secondary motion of clothing and thereby produce varying clothing wrinkles. Rendering this requires simultaneously modeling the geometry, motion (human pose, velocity dynamics, etc.), and appearance of both the body and the clothing. Since this process involves complex non-rigid physical interactions between the body and clothing, traditional 3D representations struggle to handle it.

Learning dynamic digital-human rendering from video sequences has made great progress in recent years. Existing methods usually treat rendering as a neural mapping from human pose to image, adopting a "motion encoder → motion features → appearance decoder" paradigm. Supervised purely by image losses, this paradigm focuses too heavily on per-frame reconstruction and lacks any modeling of motion continuity, so it struggles to capture complex motions such as body motion coupled with the secondary motion of clothing.

To solve this problem, the S-Lab team at Nanyang Technological University, Singapore, proposes a new paradigm of joint motion-appearance learning for dynamic human reconstruction, together with a surface-based triplane representation that unifies physical motion modeling and appearance modeling in a single framework, opening up new ideas for improving the quality of dynamic human rendering. The new paradigm effectively models the secondary motion of clothing, learns dynamic human reconstruction from fast-motion videos (such as dancing), and renders motion-dependent shadows. Its rendering is 9× faster than volumetric rendering methods, and it improves LPIPS image quality by about 19 percentage points.


  • Paper title: SurMo: Surface-based 4D Motion Modeling for Dynamic Human Rendering
  • Paper address: https://arxiv.org/pdf/2404.01225.pdf
  • Project homepage: https://taohuumd.github.io/projects/SurMo
  • GitHub link: https://github.com/TaoHuUMD/SurMo
Method Overview

[Figure: overview of the SurMo pipeline]

To address the shortcoming of the existing "motion encoder → motion features → appearance decoder" paradigm, which focuses only on appearance reconstruction and ignores motion-continuity modeling, the paper proposes a new paradigm, SurMo: ① motion encoder → motion features, ② motion decoder, ③ appearance decoder. As shown in the figure above, the paradigm is divided into three stages (a code sketch follows the list):

  • Unlike existing methods that model motion in sparse 3D space, SurMo proposes 4D (XYZ-T) motion modeling on the human body's surface manifold (equivalently, a compact 2D texture UV space), representing motion with a surface-based triplane.
  • A physical motion decoder predicts the next frame's motion state from the current motion features (3D pose, velocity, motion trajectory, etc.), such as the spatial derivative of motion (surface normals) and its temporal derivative (velocity), thereby modeling the continuity of motion features.
  • 4D appearance decoding renders the motion features over time into free-viewpoint 3D video, mainly via Hybrid Volumetric-Textural Rendering (HVTR) [Hu et al. 2022].
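To make the three-stage paradigm concrete, here is a minimal PyTorch-style sketch. It is a simplification under stated assumptions: the surface-based triplane is reduced to three small feature planes produced by an MLP, all module names and tensor shapes (`pose_dim`, `feat`, `res`) are illustrative, and the appearance decoder is a plain MLP standing in for the paper's hybrid volumetric-textural renderer (HVTR).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SurMoSketch(nn.Module):
    """Illustrative three-stage pipeline: (1) motion encoder -> surface-based
    triplane features, (2) motion decoder predicting next-state physical
    quantities, (3) appearance decoder producing RGB. Shapes are placeholders."""

    def __init__(self, pose_dim=72, feat=32, res=32):
        super().__init__()
        self.feat, self.res = feat, res
        # (1) Motion encoder: pose + velocity -> three feature planes laid out
        # over the body surface (UV-based), i.e. a surface-based triplane.
        self.encoder = nn.Sequential(
            nn.Linear(2 * pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 3 * feat * res * res),
        )
        # (2) Motion decoder: predicts surface normal (spatial derivative) and
        # velocity (temporal derivative) to supervise motion continuity.
        self.motion_dec = nn.Sequential(
            nn.Linear(feat, 128), nn.ReLU(), nn.Linear(128, 6),
        )
        # (3) Appearance decoder: per-sample features -> RGB (stand-in for HVTR).
        self.app_dec = nn.Sequential(
            nn.Linear(feat, 128), nn.ReLU(), nn.Linear(128, 3), nn.Sigmoid(),
        )

    def forward(self, pose, velocity, uvh):
        # pose, velocity: (B, pose_dim); uvh: (B, N, 3) surface coords in [-1, 1].
        B, N, _ = uvh.shape
        planes = self.encoder(torch.cat([pose, velocity], dim=-1))
        planes = planes.view(B, 3, self.feat, self.res, self.res)
        feats = 0
        # Sample the three planes on the (u,v), (u,h), (v,h) coordinate pairs.
        for i, (a, b) in enumerate([(0, 1), (0, 2), (1, 2)]):
            grid = uvh[..., [a, b]].view(B, N, 1, 2)
            sampled = F.grid_sample(planes[:, i], grid, align_corners=True)
            feats = feats + sampled.squeeze(-1).transpose(1, 2)  # (B, N, feat)
        dyn = self.motion_dec(feats)  # normals + velocity, for motion supervision
        rgb = self.app_dec(feats)     # rendered appearance per sample
        return rgb, dyn

# Usage sketch:
model = SurMoSketch()
rgb, dyn = model(torch.zeros(1, 72), torch.zeros(1, 72),
                 torch.rand(1, 1024, 3) * 2 - 1)
```

The structural point the sketch tries to capture is that the same sampled surface features feed both a motion decoder (supervised for continuity) and an appearance decoder, rather than motion features flowing only into appearance reconstruction.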

SurMo is trained end-to-end on videos with a reconstruction loss and an adversarial loss to learn dynamic human rendering.
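A hedged sketch of what such an end-to-end objective could look like; the L1/MSE loss forms, the non-saturating adversarial term, the motion-supervision term, and all weights are assumptions of this sketch, not the paper's exact configuration.

```python
import torch.nn.functional as F

def surmo_style_losses(pred_rgb, gt_rgb, pred_dyn, gt_dyn, disc,
                       w_motion=0.1, w_adv=0.01):
    """Illustrative objective: per-frame image reconstruction, motion
    supervision (predicted normals/velocity vs. derived ground truth), and a
    non-saturating adversarial term from a discriminator `disc`."""
    recon = F.l1_loss(pred_rgb, gt_rgb)          # image reconstruction loss
    motion = F.mse_loss(pred_dyn, gt_dyn)        # motion-continuity supervision
    adv = F.softplus(-disc(pred_rgb)).mean()     # generator adversarial loss
    return recon + w_motion * motion + w_adv * adv
```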

Experimental results

The study is evaluated on 9 dynamic human video sequences drawn from 3 datasets: ZJU-MoCap [Peng et al. 2021], AIST [Li, Yang et al. 2021], and MPII-RRDC [Habermann et al. 2021].

Novel-view time-sequence rendering

On the ZJU-MoCap dataset, the study examines time-varying appearance under novel-view rendering, focusing on two sequences shown in the figure below. Each sequence contains similar poses that appear along different motion trajectories, e.g., ①②, ③④, ⑤⑥. Because SurMo models the motion trajectory, it generates dynamic effects that change over time, whereas related methods produce results that depend only on pose, with clothing wrinkles nearly identical across different trajectories.

[Figure: time-varying appearance under novel-view rendering on ZJU-MoCap]

Rendering motion-dependent shadows and secondary clothing motion

On the MPII-RRDC dataset, SurMo is evaluated on motion-dependent shadows and secondary clothing motion, as shown in the figure below. The sequence was captured on an indoor soundstage, where the lighting cast motion-dependent shadows on the performer due to self-occlusion.


SurMo recovers these shadows under novel-view rendering, e.g., ①②, ③④, ⑦⑧, whereas the baseline HumanNeRF [Weng et al.] cannot. SurMo also reconstructs secondary clothing motion that varies with the motion trajectory, such as the differing wrinkles in the jumping motions ⑤⑥, a dynamic effect HumanNeRF fails to capture.

[Figure: motion-dependent shadows and secondary clothing motion on MPII-RRDC]

Rendering fast-moving human bodies

SurMo also renders human bodies from fast-motion videos, recovering motion-dependent clothing-wrinkle details that baseline methods cannot.

[Figure: rendering fast-moving human bodies]

Ablation experiments

(1) Motion modeling on the human body surface

The study compares two motion-modeling approaches: the commonly used motion modeling in volumetric space versus SurMo's motion modeling on the human surface manifold; concretely, a volumetric triplane versus a surface-based triplane, as shown in the figure below.

The volumetric triplane turns out to be a sparse representation, with only about 21–35% of its features used for rendering, whereas the surface-based triplane reaches about 85% feature utilization, giving it a clear advantage in handling self-occlusion, as shown in (d). Moreover, the surface-based triplane achieves faster rendering by filtering out points far from the surface during volume rendering, as shown in (c).
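One way to picture that speed-up: ray samples whose distance to the body surface exceeds a threshold can be discarded before the expensive decoding step runs. A minimal sketch, assuming a caller-supplied distance function; the function name `filter_near_surface`, the threshold, and the shapes are illustrative, not the paper's API.

```python
import torch

def filter_near_surface(points, dist_to_surface, tau=0.05):
    """Keep only ray samples within distance `tau` of the body surface, so the
    appearance decoder runs on far fewer points. `dist_to_surface` is assumed
    to map (N, 3) points to (N,) unsigned distances (e.g., from a mesh)."""
    d = dist_to_surface(points)   # (N,) distance of each sample to the surface
    mask = d < tau                # samples close enough to contribute
    return points[mask], mask

# Usage sketch with a dummy distance function (unit sphere as the "surface"):
pts = torch.rand(1024, 3) * 2 - 1
pts_kept, keep_mask = filter_near_surface(
    pts, lambda p: (p.norm(dim=-1) - 1).abs())
```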

[Figure: volumetric triplane vs. surface-based triplane]

The study also shows that the surface-based triplane converges faster during training than the volumetric triplane and holds clear advantages in clothing-wrinkle detail and self-occlusion, as shown in the figure above.

(2) Dynamics learning

SurMo's motion modeling is examined through ablations, shown below. The results indicate that SurMo decouples the static attributes of motion (e.g., a fixed pose at a given frame) from its dynamic attributes (e.g., velocity). For example, when the velocity is changed, the wrinkles of tight-fitting clothing stay nearly unchanged (①), while the wrinkles of loose clothing vary substantially with velocity (②), consistent with everyday observation.

[Figure: ablation on dynamics learning]
