Stability AI, the company behind Stable Diffusion, has launched something new.
This time it brings new progress in image-to-3D:
Stable Video 3D (SV3D), built on Stable Video Diffusion, can generate high-quality 3D meshes from just a single input image.
Stable Video Diffusion (SVD) is Stability AI's previously released model for generating high-resolution videos. The arrival of SV3D marks the first time a video diffusion model has been successfully applied to 3D generation.
Stability AI states that, building on this, SV3D greatly improves the quality and view consistency of 3D generation.
The model weights are still open source, but only for non-commercial use; commercial use requires a Stability AI membership.
Without further ado, let's look at the details of the paper.
The core idea of SV3D is to introduce a latent video diffusion model and exploit the video model's temporal consistency to improve the consistency of 3D generation.
Video data is also far easier to obtain than 3D data.
Stability AI provides two versions of SV3D this time:

- SV3D_u: generates orbital videos of an object from a single image, without any camera conditioning.
- SV3D_p: extends SV3D_u to also accept specified camera paths, so 3D video can be generated along a chosen trajectory.
The researchers also improved the 3D optimization pipeline: a coarse-to-fine training strategy first optimizes a NeRF and then a DMTet mesh to produce the final 3D object.
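As a rough illustration, here is a minimal PyTorch sketch of such a two-stage, coarse-to-fine schedule. Everything here (the toy field, the photometric loss, the step counts) is an illustrative stand-in, not the paper's implementation; in particular, the real fine stage converts the field into a DMTet mesh rather than just lowering the learning rate.

```python
import torch

class TinyField(torch.nn.Module):
    """Toy stand-in for a radiance field: (x, y, z) -> (r, g, b)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 3),
        )

    def forward(self, pts):
        return self.net(pts)

def coarse_to_fine(frames, coarse_steps=500, fine_steps=200):
    """`frames`: list of (N, 3) tensors sampled from SV3D's output views."""
    field = TinyField()
    opt = torch.optim.Adam(field.parameters(), lr=1e-3)

    def fit(steps, batch):
        for step in range(steps):
            pts = torch.rand(batch, 3) * 2 - 1   # random points in [-1, 1]^3
            rendered = field(pts)                # placeholder "render"
            target = frames[step % len(frames)][:batch]
            loss = torch.nn.functional.mse_loss(rendered, target)
            opt.zero_grad(); loss.backward(); opt.step()

    fit(coarse_steps, batch=1024)   # stage 1: coarse NeRF-style fit
    for g in opt.param_groups:      # stage 2: the real method switches to a
        g["lr"] = 1e-4              # DMTet mesh; here we simply refine further
    fit(fine_steps, batch=4096)
    return field
```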
They also designed a special loss, Masked Score Distillation Sampling (SDS), which improves the quality and consistency of the generated 3D models by optimizing regions that are not directly visible in the training data.
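A masked score-distillation objective can be sketched roughly as below. The `denoiser` API, the simplified noising step, and how the visibility mask is obtained are all assumptions for illustration, not the paper's code:

```python
import torch

def masked_sds_loss(rendered, unseen_mask, denoiser, t):
    """
    Rough sketch of a masked SDS objective.
    rendered:    (B, C, H, W) images rendered from the current 3D model
    unseen_mask: (B, 1, H, W), 1 where a pixel is NOT covered by the
                 reference views, so only those regions receive gradients
    denoiser:    any diffusion model predicting the added noise (assumed API)
    t:           diffusion timestep tensor
    """
    noise = torch.randn_like(rendered)
    noisy = rendered + t * noise             # simplified forward process
    eps_pred = denoiser(noisy, t)
    grad = (eps_pred - noise) * unseen_mask  # mask the SDS gradient
    # Usual SDS trick: a surrogate loss whose gradient w.r.t. `rendered`
    # is exactly `grad`, without backpropagating through the denoiser.
    return (grad.detach() * rendered).sum()
```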
At the same time, SV3D introduces a spherical-Gaussian-based illumination model that disentangles lighting effects from texture, effectively reducing baked-in lighting artifacts while maintaining texture clarity.
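The spherical Gaussian idea itself is compact enough to show. The sketch below evaluates a small bank of SG lobes at the surface normals and multiplies the result with the albedo, which is what keeps texture and lighting factored apart; the lobe storage format here is an assumption:

```python
import torch

def sg_lobe(normals, mu, lam, a):
    """One spherical Gaussian lobe: G(n) = a * exp(lam * (n . mu - 1))."""
    cos = (normals * mu).sum(-1, keepdim=True)
    return a * torch.exp(lam * (cos - 1.0))

def shade(albedo, normals, lobes):
    """
    albedo:  (N, 3) texture color, free of lighting
    normals: (N, 3) unit surface normals
    lobes:   list of (mu, lam, a) tuples describing the light (assumed layout)
    Illumination is the sum of all lobes; color = albedo * illumination.
    """
    light = sum(sg_lobe(normals, mu, lam, a) for mu, lam, a in lobes)
    return albedo * light
```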
Architecturally, the key design of SV3D works as follows.
The camera trajectory and the diffusion noise timestep are fed into the residual blocks together: both are converted into sinusoidal position embeddings, which are then concatenated, linearly transformed, and added to the noise timestep embedding.
This design is meant to give the model fine-grained control over the camera trajectory as well as the noise input.
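In code, that conditioning path might look something like the sketch below; the embedding dimension and the single linear projection are illustrative guesses, not the paper's exact layer sizes:

```python
import math
import torch

def sinusoidal_embedding(x, dim=256):
    """Map scalars (angles, timesteps) to sin/cos positional embeddings."""
    half = dim // 2
    freqs = torch.exp(-math.log(10000.0) * torch.arange(half) / half)
    args = x[..., None] * freqs
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)

class CameraConditioning(torch.nn.Module):
    """Embed camera elevation/azimuth and the diffusion timestep, then
    project them into one vector that gets added to the noise timestep
    embedding inside each residual block."""
    def __init__(self, dim=256):
        super().__init__()
        self.proj = torch.nn.Linear(3 * dim, dim)

    def forward(self, elevation, azimuth, t):
        emb = torch.cat([
            sinusoidal_embedding(elevation),
            sinusoidal_embedding(azimuth),
            sinusoidal_embedding(t),
        ], dim=-1)
        return self.proj(emb)
```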
In addition, SV3D applies CFG (classifier-free guidance) during generation to control sharpness. In particular, when generating the last few frames of an orbit, it uses triangle CFG scaling to avoid over-sharpening.
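A minimal sketch of what triangle CFG scaling can look like: the guidance scale ramps up linearly toward the back view and back down toward the frames that close the loop. The min/max scale values here are placeholders:

```python
import torch

def triangle_cfg_scales(num_frames, min_scale=1.0, max_scale=2.5):
    """Per-frame guidance scales over one orbit: min_scale at the first and
    last (front) frames, max_scale at the middle (back) view, so the frames
    closing the loop are not over-sharpened."""
    i = torch.arange(num_frames, dtype=torch.float32)
    mid = (num_frames - 1) / 2
    tri = 1.0 - (i - mid).abs() / mid   # 0 at both ends, 1 in the middle
    return min_scale + (max_scale - min_scale) * tri

def guided_eps(eps_uncond, eps_cond, scales):
    # Standard classifier-free guidance, with a per-frame scale.
    s = scales.view(-1, 1, 1, 1)
    return eps_uncond + s * (eps_cond - eps_uncond)
```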
The researchers trained SV3D on the Objaverse dataset, at an image resolution of 576×576 and a field of view of 33.8 degrees. According to the paper, all three models (SV3D_u, SV3D_c, SV3D_p) were trained for about 6 days on 4 nodes, each equipped with 8 80GB A100 GPUs.
In both novel view synthesis (NVS) and 3D reconstruction, SV3D surpasses existing methods and reaches SOTA.
Judging from the qualitative comparisons, the multi-view images generated by SV3D show richer detail and stay closer to the original input. In other words, SV3D captures details more accurately and stays consistent across viewpoint changes when understanding and reconstructing an object's 3D structure.
Results like these have gotten many netizens excited:
It is conceivable that within the next 6-12 months, 3D generation technology will be used in gaming and video projects.
There are always some bold ideas in the comment area...
The project is open source, and the first wave of users are already playing with it; it even runs on an RTX 4090.
Reference links:
[1] https://twitter.com/StabilityAI/status/1769817136799855098
[2] https://stability.ai/news/introducing-stable-video-3d
[3] https://sv3d.github.io/index.html