
Generate 4D content in minutes with controllable motion: Peking University and Michigan propose DG4D

Wang Lin
Published: 2024-07-12 09:30:21
The AIxiv column is where this site publishes academic and technical content. Over the past few years, the AIxiv column has carried more than 2,000 reports covering top laboratories at major universities and companies around the world, effectively promoting academic exchange and dissemination. If you have excellent work you would like to share, please feel free to submit it or contact us for coverage. Submission email: liyazhou@jiqizhixin.com; zhaoyunfeng@jiqizhixin.com

The author of this article, Dr. Pan Liang, is currently a Research Scientist at the Shanghai Artificial Intelligence Laboratory. From 2020 to 2023 he was a Research Fellow at S-Lab, Nanyang Technological University, Singapore, advised by Professor Liu Ziwei. His research focuses on computer vision, 3D point clouds, and virtual humans; he has published multiple papers in top conferences and journals, with more than 2,700 Google Scholar citations, and has served as a reviewer for top conferences and journals in computer vision and machine learning.

Recently, the SenseTime-Nanyang Technological University Joint AI Research Center S-Lab, the Shanghai Artificial Intelligence Laboratory, Peking University, and the University of Michigan jointly proposed DreamGaussian4D (DG4D), which combines explicit modeling of spatial transformations with static 3D Gaussian Splatting (GS) to enable efficient 4D content generation.

4D content generation has made significant progress recently, but existing methods suffer from long optimization times, poor motion controllability, and low detail quality. DG4D proposes an overall framework with two main modules: 1) image-to-4D GS generation: we first use DreamGaussianHD to generate a static 3D GS model, then produce dynamics via HexPlane-based Gaussian deformation; 2) video-to-video texture refinement: the resulting UV-space texture maps are refined, and their temporal consistency enhanced, using a pretrained image-to-video diffusion model.

Notably, DG4D reduces the optimization time for 4D content generation from hours to minutes (see Figure 1), allows visual control over the generated 3D motion, and supports exporting animated mesh models that can be realistically rendered in 3D engines.


  • Paper title: DreamGaussian4D: Generative 4D Gaussian Splatting

  • Project page: https://jiawei-ren.github.io/projects/dreamgaussian4d/

  • Paper: https://arxiv.org/abs/2312.17142

  • Demo: https://huggingface.co/spaces/jiawei011/dreamgaussian4d


Figure 1. DG4D achieves basic convergence of 4D content optimization within four and a half minutes.

Issues and Challenges

Generative models can greatly simplify the production of diverse digital content such as 2D images, videos, and 3D scenes, and have made significant progress in recent years. 4D content is an important content form for many downstream applications such as games, film, and television. Generated 4D content should also support import into traditional graphics rendering engines (such as Blender or Unreal Engine) to connect with existing graphics content production pipelines (see Figure 2).

Although some studies are dedicated to dynamic 3D (i.e., 4D) generation, efficiently generating high-quality 4D scenes remains challenging. In recent years, a growing number of methods have achieved 4D content generation by combining video and 3D generation models to constrain the consistency of content appearance and motion across arbitrary viewing angles.

Figure 2. Generated 4D content can be imported into traditional graphics rendering engines.

Most existing methods represent 4D scenes with dynamic neural radiance fields (dynamic NeRF). For example, MAV3D [1] achieves text-to-4D content generation by distilling a text-to-video diffusion model into a HexPlane [2] representation. Consistent4D [3] introduces a video-to-4D framework that optimizes a cascaded DyNeRF to generate 4D scenes from statically captured videos. Using multiple diffusion-model priors, Animate124 [4] can animate a single in-the-wild 2D image into a 3D dynamic video via a textual motion description. Building on hybrid score distillation sampling (SDS) [5], 4D-fy [6] generates compelling text-to-4D content using multiple pretrained diffusion models.

However, all of the above methods [1,3,4,6] require several hours to generate a single 4D NeRF, which greatly limits their practical applicability, and none of them can effectively control or select the final generated motion. These shortcomings stem mainly from two factors: first, the underlying implicit 4D representations are inefficient, with slow rendering and poorly regularized motion; second, the stochastic nature of video SDS makes convergence difficult and introduces instability and artifacts into the final results.
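To make the role of SDS concrete, here is a minimal PyTorch sketch of the score distillation gradient from DreamFusion [5]. The `diffusion` wrapper and its `add_noise`/`pred_noise` helpers are hypothetical names for illustration, not a real library API.

```python
import torch

def sds_grad(rendered, diffusion, text_emb):
    """Minimal sketch of Score Distillation Sampling (SDS) [5].

    `diffusion` is a hypothetical wrapper around a pretrained diffusion
    model; `add_noise` and `pred_noise` are assumed helpers, not a real API.
    """
    t = torch.randint(20, 980, (1,), device=rendered.device)  # random timestep
    eps = torch.randn_like(rendered)
    noisy = diffusion.add_noise(rendered, eps, t)   # forward diffusion q(x_t | x_0)
    with torch.no_grad():
        eps_hat = diffusion.pred_noise(noisy, t, text_emb)  # denoiser prediction
    # SDS gradient w.r.t. the rendered image: w(t) * (eps_hat - eps),
    # with the timestep weighting w(t) set to 1 for simplicity.
    return eps_hat - eps

# Usage: pull the gradient back into the underlying 3D parameters.
# rendered = render(scene, random_camera)  # differentiable render (assumed)
# rendered.backward(gradient=sds_grad(rendered, diffusion, text_emb))
```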

Method introduction

Unlike methods that directly optimize a 4D NeRF, DG4D builds an efficient and expressive representation for 4D content generation by combining static Gaussian splatting with explicit modeling of spatial transformations. In addition, video generation models can provide valuable spatiotemporal priors for high-quality 4D generation. Specifically, we propose an overall framework consisting of two main stages: 1) image-to-4D GS generation; 2) texture-map refinement based on a large video model. The sketch below summarizes the flow.
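As a hypothetical end-to-end skeleton (every helper name here, such as `generate_static_gs`, `DeformationField`, `extract_mesh`, and `refine_uv_textures`, is a placeholder rather than the released DG4D API):

```python
# Hypothetical end-to-end skeleton of DG4D; all helper names below,
# including `video_diffusion_model`, are illustrative placeholders.

def dreamgaussian4d(image, driving_video):
    # Stage 1a: image -> static 3D Gaussian Splatting (DreamGaussianHD)
    static_gs = generate_static_gs(image)

    # Stage 1b: fit a time-conditioned deformation field (HexPlane-based)
    # so each deformed frame matches the corresponding driving-video frame.
    deform = DeformationField()
    optimize_deformation(deform, static_gs, driving_video)

    # Export one mesh per timestamp from the deformed Gaussians.
    meshes = [extract_mesh(deform(static_gs, t))
              for t in range(len(driving_video))]

    # Stage 2: video-to-video refinement of the shared UV texture maps.
    refine_uv_textures(meshes, video_diffusion_model)
    return meshes
```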

1. Image-to-4D GS generation

Figure 3. Image-to-4D GS generation framework.

In this stage, we represent the dynamic 4D scene with a static 3D GS model and its spatial deformation. Given a 2D image, we first generate a static 3D GS model using the enhanced DreamGaussianHD method. We then optimize a time-dependent deformation field over the static 3D GS, estimating the Gaussian deformation at each timestamp so that the shape and texture of each deformed frame match the corresponding frame of the driving video. At the end of this stage, a sequence of dynamic 3D mesh models is produced. A minimal sketch of the fitting loop follows.
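A minimal sketch of this per-frame fitting loop, assuming a differentiable Gaussian renderer `render`, hypothetical helpers `apply_deltas` and `sample_random_camera`, and the `sds_grad` function sketched earlier; the released code differs in detail:

```python
import random
import torch

# Hypothetical stage-1 fitting loop; `deform`, `static_gs`, `apply_deltas`,
# `render`, `ref_camera`, `sample_random_camera`, `driving_video`,
# `diffusion`, and `text_emb` are all assumed placeholders.
opt = torch.optim.Adam(deform.parameters(), lr=1e-3)
for step in range(500):
    t = random.randrange(len(driving_video))        # random timestamp
    dx, dq, ds = deform(static_gs, t)               # per-Gaussian deltas
    gs_t = apply_deltas(static_gs, dx, dq, ds)      # deformed Gaussians

    # Reference view: match the corresponding driving-video frame.
    pred = render(gs_t, ref_camera)                 # differentiable render
    rec_loss = torch.nn.functional.mse_loss(pred, driving_video[t])

    # Novel views: regularize with a diffusion prior via SDS. Multiplying
    # the detached SDS gradient by the render gives a surrogate loss whose
    # gradient w.r.t. the render is exactly sds_grad(...).
    novel = render(gs_t, sample_random_camera())
    sds_loss = (sds_grad(novel, diffusion, text_emb).detach() * novel).sum()

    (rec_loss + sds_loss).backward()
    opt.step()
    opt.zero_grad()
```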


DreamGaussianHD

Building on DreamGaussian [7], a recent image-to-3D method based on 3D GS, we make several further improvements and assemble a better recipe for 3D GS generation and initialization. The main improvements are: 1) adopting multi-view optimization; 2) setting the background of rendered images during optimization to black, which is better suited to generation. We call the improved version DreamGaussianHD; the effect of these improvements is shown in Figure 4.

Figure 4. Effect of the DreamGaussianHD improvements.

Starting from the generated static 3D GS model, we predict the deformation of the Gaussian kernels at each frame to produce a dynamic 4D GS model that matches the expected video. To represent the dynamics, we use HexPlane (Figure 5) to predict the displacement, rotation, and scale of each Gaussian kernel at every timestamp, driving the generation of a per-frame dynamic model. We also tailor the network design, in particular adding residual connections and zero-initializing the last few linear layers, so that the dynamic field is smoothly and fully initialized from the static 3D GS model (the effect is shown in Figure 6).

Figure 5. HexPlane representation of the dynamic deformation field.

Figure 6. Effect of deformation-field initialization on the final dynamic field.
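The zero-initialization trick can be made concrete. Below is a runnable PyTorch sketch of a HexPlane-style feature grid and a deformation head whose output layers start at zero, so the 4D model initially reproduces the static 3D GS exactly. The concatenation-based feature fusion and all sizes are simplifications, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HexPlane(nn.Module):
    """Six 2D feature planes over the axis pairs of (x, y, z, t).
    Features are fused by concatenation here for brevity; the original
    HexPlane [2] uses a different fusion scheme."""
    def __init__(self, c: int = 16, res: int = 64):
        super().__init__()
        self.pairs = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
        self.planes = nn.ParameterList(
            nn.Parameter(0.1 * torch.randn(1, c, res, res))
            for _ in self.pairs)

    def forward(self, xyzt: torch.Tensor) -> torch.Tensor:
        # xyzt: (N, 4) coordinates normalized to [-1, 1]
        feats = []
        for plane, (i, j) in zip(self.planes, self.pairs):
            grid = xyzt[:, (i, j)].view(1, -1, 1, 2)        # (1, N, 1, 2)
            f = F.grid_sample(plane, grid, align_corners=True)
            feats.append(f.view(plane.shape[1], -1).t())    # (N, c)
        return torch.cat(feats, dim=-1)                     # (N, 6c)

class DeformHead(nn.Module):
    """Maps HexPlane features to per-Gaussian displacement, rotation
    (quaternion delta), and scale offsets. Output layers are
    zero-initialized so the deformation starts at exactly zero."""
    def __init__(self, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.to_dx = nn.Linear(hidden, 3)   # displacement
        self.to_dq = nn.Linear(hidden, 4)   # rotation (quaternion delta)
        self.to_ds = nn.Linear(hidden, 3)   # scale offset
        for head in (self.to_dx, self.to_dq, self.to_ds):
            nn.init.zeros_(head.weight)
            nn.init.zeros_(head.bias)

    def forward(self, feats):
        h = self.mlp(feats)
        return self.to_dx(h), self.to_dq(h), self.to_ds(h)

# Smoke test: at initialization all deltas are zero, i.e., the dynamic
# model exactly reproduces the static 3D GS at every timestamp.
hexplane, head = HexPlane(), DeformHead(feat_dim=6 * 16)
xyzt = torch.rand(1024, 4) * 2 - 1
dx, dq, ds = head(hexplane(xyzt))
assert dx.abs().max() == 0 and dq.abs().max() == 0 and ds.abs().max() == 0
```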

2. Video-to-video texture refinement

Figure 7. Video-to-video texture refinement framework.

As in DreamGaussian, after the first stage generates the 4D GS-based dynamic model, a sequence of mesh models can be extracted, and the textures can be further refined in the meshes' UV space. Unlike DreamGaussian, however, which uses an image generation model to refine the texture of a single static 3D mesh, we must refine an entire 3D mesh sequence.

Moreover, we found that following DreamGaussian's approach of refining the texture of each mesh in the sequence independently yields textures that are inconsistent across timestamps, often with flickering artifacts. We therefore depart from DreamGaussian and propose a video-to-video texture refinement method in UV space driven by a large video generation model. Specifically, during optimization we randomly sample camera trajectories, render videos along them, and apply noising and denoising to the rendered videos to enhance the textures of the mesh model sequence, as sketched below.
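A minimal sketch of this SDEdit-style refinement loop; `render_sequence`, `sample_camera_trajectory`, and the `video_diffusion` wrapper (with `add_noise`/`denoise`) are hypothetical names, and the actual optimization details differ.

```python
import torch

def refine_uv_texture(meshes, texture, video_diffusion, iters=50, strength=0.5):
    """Hypothetical sketch: refine one UV texture shared by a mesh sequence.

    `texture` is a learnable UV texture tensor (requires_grad=True);
    `render_sequence`, `sample_camera_trajectory`, and `video_diffusion`
    are assumed placeholders, not a real API.
    """
    opt = torch.optim.Adam([texture], lr=1e-2)
    for _ in range(iters):
        cams = sample_camera_trajectory(len(meshes))      # random trajectory
        video = render_sequence(meshes, texture, cams)    # differentiable render
        with torch.no_grad():
            # Partially noise the rendered video, then denoise it with the
            # pretrained video prior to get a temporally consistent target.
            noisy = video_diffusion.add_noise(video, strength)
            target = video_diffusion.denoise(noisy, strength)
        loss = torch.nn.functional.mse_loss(video, target)
        loss.backward()            # gradients flow back into the UV texture
        opt.step()
        opt.zero_grad()
    return texture
```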

A comparison of texture refinement using an image generation model versus a video generation model is shown in Figure 8.


Figure 8. Texture refinement with an image generation model vs. a video generation model.

Experimental results

Compared with previous methods that optimize an entire 4D NeRF, DG4D significantly reduces the time required to generate 4D content; see Table 1 for the timing comparison.


Table 1. Comparison of optimization time for 4D content generation.

Quantitative results on generation consistency are reported in Table 2.

Table 2. Quantitative comparison of image-to-4D generation methods.

For the video-to-4D setting, a quantitative comparison of methods that generate 4D content from video is given in Table 3.

Table 3. Quantitative comparison of video-to-4D generation methods.


In addition, we conducted a user study on sampled generation results from the various methods; the results are reported in Table 4.

Table 4. User study on 4D content generated from a single image.

Qualitative comparisons between DG4D and existing open-source SOTA image-to-4D and video-to-4D methods are shown in Figure 9 and Figure 10, respectively.

Figure 9. Comparison of image-to-4D generation results.


Figure 10. Comparison of video-to-4D generation results.

In addition, we also generated static 3D content using recent feed-forward methods that produce 3D GS directly from a single image (i.e., without SDS-based optimization), and used the result to initialize dynamic 4D GS generation. Feed-forward 3D GS generation can produce higher-quality, more diverse 3D content faster than SDS-based optimization. The resulting 4D content is shown in Figure 11.

Figure 11. 4D dynamic content generated from feed-forward 3D GS.


More 4D content generated from single images is shown in Figure 12.

Conclusion

Based on 4D GS, we propose DreamGaussian4D (DG4D), an efficient image-to-4D generation framework. Compared with existing 4D content generation frameworks, DG4D significantly reduces optimization time from hours to minutes. Furthermore, we demonstrate using generated videos to drive motion generation, achieving visually controllable 3D motion.

Finally, DG4D supports 3D mesh model extraction with high-quality, temporally consistent texture refinement. We hope that the 4D content generation framework proposed in DG4D will promote research on 4D content generation and contribute to diverse practical applications.

References

[1] Singer et al. "Text-to-4D dynamic scene generation." Proceedings of the 40th International Conference on Machine Learning. 2023.

[2] Cao et al. "HexPlane: A fast representation for dynamic scenes." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.

[3] Jiang et al. "Consistent4D: Consistent 360° Dynamic Object Generation from Monocular Video." The Twelfth International Conference on Learning Representations. 2023.

[4] Zhao et al. "Animate124: Animating one image to 4d dynamic scene." arXiv preprint arXiv:2311.14603 (2023).

[5] Poole et al. "DreamFusion: Text-to-3D using 2D Diffusion." The Eleventh International Conference on Learning Representations. 2022.

[6] Bahmani et al. "4D-fy: Text-to-4D generation using hybrid score distillation sampling." arXiv preprint arXiv:2311.17984 (2023).

[7] Tang et al. "DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation." The Twelfth International Conference on Learning Representations. 2023.


Source: jiqizhixin.com