


CVPR 2024 | AI can faithfully reconstruct a skirt flying mid-dance: Nanyang Technological University proposes a new paradigm for dynamic human rendering
The AIxiv column is where this site publishes academic and technical content. Over the past few years, the column has carried more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work you would like to share, feel free to submit it or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com.
- Paper title: SurMo: Surface-based 4D Motion Modeling for Dynamic Human Rendering
- Paper address: https://arxiv.org/pdf/2404.01225.pdf
- Project homepage: https://taohuumd.github.io/projects/SurMo
- GitHub link: https://github.com/TaoHuUMD/SurMo

Unlike existing methods that model motion in sparse three-dimensional space, SurMo proposes:
- Four-dimensional (XYZ-T) motion modeling defined on the human-body surface manifold (a compact two-dimensional UV texture space), represented by three feature planes (a surface-based triplane).
- A physical motion decoder that takes the current motion features (3D pose, velocity, motion trajectory, etc.) and predicts the motion state of the next frame, specifically the spatial derivative of motion (the surface normal) and its temporal derivative (the velocity), in order to model the temporal continuity of motion features.
- Four-dimensional appearance decoding, which decodes the motion features over time to render 3D free-viewpoint video, implemented mainly via hybrid volumetric-textural neural rendering (Hybrid Volumetric-Textural Rendering, HVTR [Hu et al. 2022]).
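To make the first two components above concrete, here is a minimal NumPy sketch, not the paper's implementation: the plane parameterization (u, v, h), the 32-channel feature width, the 64x64 plane resolution, and the two-head decoder sizes are all assumptions made for illustration. Features for a surface-relative point are gathered by bilinear lookup from three planes, and a small MLP with two heads predicts the surface normal and the velocity.

```python
import numpy as np

rng = np.random.default_rng(0)
C = 32  # assumed feature channels per plane

# Surface-based triplane: feature planes over (u, v), (u, h), (v, h), where
# (u, v) are surface UV coordinates and h is a height above the surface.
planes = [rng.normal(size=(64, 64, C)) * 0.01 for _ in range(3)]

def bilerp(plane, x, y):
    """Bilinearly sample an (H, W, C) plane at continuous coords in [0, 1]."""
    H, W, _ = plane.shape
    fx, fy = x * (W - 1), y * (H - 1)
    x0, y0 = int(fx), int(fy)
    x1, y1 = min(x0 + 1, W - 1), min(y0 + 1, H - 1)
    wx, wy = fx - x0, fy - y0
    return ((1 - wx) * (1 - wy) * plane[y0, x0] + wx * (1 - wy) * plane[y0, x1]
            + (1 - wx) * wy * plane[y1, x0] + wx * wy * plane[y1, x1])

def sample_surface_triplane(u, v, h):
    """Concatenate features from the three planes for one surface-relative point."""
    return np.concatenate([bilerp(planes[0], u, v),
                           bilerp(planes[1], u, h),
                           bilerp(planes[2], v, h)])

# Physical motion decoder sketch: a shared trunk with two heads predicting the
# spatial derivative (surface normal) and the temporal derivative (velocity).
W1, b1 = rng.normal(size=(3 * C, 64)) * 0.1, np.zeros(64)
Wn, Wv = rng.normal(size=(64, 3)) * 0.1, rng.normal(size=(64, 3)) * 0.1

def physical_motion_decoder(feat):
    hid = np.maximum(feat @ W1 + b1, 0.0)   # ReLU trunk
    n = hid @ Wn
    n = n / (np.linalg.norm(n) + 1e-8)      # unit surface normal
    vel = hid @ Wv                          # 3D velocity
    return n, vel

feat = sample_surface_triplane(0.3, 0.7, 0.5)
normal, velocity = physical_motion_decoder(feat)
```

In a trained system the planes and weights would be learned end to end and the decoded features fed to the HVTR renderer; the sketch only shows the data flow from surface coordinates to motion-state predictions.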
On the ZJU-MoCap dataset, this study examines novel-view rendering of time-varying appearances, focusing on two sequences shown in the figure below. Each sequence contains similar poses that occur along different motion trajectories, e.g. ①②, ③④, ⑤⑥. Because SurMo models the motion trajectory, it generates dynamic effects that change over time, whereas related methods produce results that depend only on pose: their clothing wrinkles are nearly identical across different trajectories.
On the MPII-RRDC dataset, SurMo is evaluated on motion-dependent shadows and loose-clothing dynamics, as shown in the figure below. The sequence was shot on an indoor soundstage, where the lighting conditions produced motion-dependent shadows on the performer due to self-occlusion.
Under novel-view rendering, SurMo recovers these shadows, e.g. ①②, ③④, ⑦⑧, while the baseline HumanNeRF [Weng et al.] cannot. SurMo also reconstructs loose-clothing motion that varies with the trajectory, such as the different wrinkles in the jumping motions ⑤⑥, a dynamic effect that HumanNeRF likewise fails to reproduce.
SurMo can also render the human body from fast-motion videos, recovering motion-related clothing-wrinkle details that competing methods cannot.
(1) Motion modeling on the human body surface
This study compares two motion modeling approaches: the currently common modeling in volumetric space, and SurMo's proposed modeling on the surface manifold. Concretely, it compares a volumetric triplane against a surface-based triplane, as shown in the figure below.
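The difference between the two parameterizations can be made concrete with a toy example. A unit sphere stands in for the body surface here; the mapping and names are illustrative assumptions, not SurMo's actual geometry. A volumetric triplane indexes a query point by its three axis-aligned projections inside a bounding box, while a surface-based triplane indexes it by surface coordinates (u, v) plus a height h off the surface, keeping the representation compact around the body.

```python
import numpy as np

def volumetric_triplane_coords(p, bbox_min, bbox_max):
    """Project a 3D point onto the XY, XZ, YZ planes of a normalized bounding box."""
    q = (np.asarray(p, dtype=float) - bbox_min) / (bbox_max - bbox_min)
    return (q[0], q[1]), (q[0], q[2]), (q[1], q[2])

def surface_triplane_coords(p):
    """Map a 3D point to (u, v, h): UV on a unit sphere plus signed height above it."""
    p = np.asarray(p, dtype=float)
    r = np.linalg.norm(p)
    u = (np.arctan2(p[1], p[0]) / (2 * np.pi)) % 1.0               # azimuth -> [0, 1)
    v = np.arccos(np.clip(p[2] / (r + 1e-9), -1.0, 1.0)) / np.pi   # polar -> [0, 1]
    return u, v, r - 1.0                                           # h = 0 on the surface

# A point exactly on the proxy surface has height h == 0.
u, v, h = surface_triplane_coords([1.0, 0.0, 0.0])
```

The intuition carried over from the paper's comparison: in the volumetric case most of the bounding box (and hence most plane area) is empty space around the body, while in the surface-based case every (u, v) texel corresponds to a point on the body, so capacity is spent where the appearance actually lives.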