
In one sentence: a 3D model can be given a realistic appearance style, down to photo-level detail.

王林
Release: 2023-04-12 17:31:12

Creating 3D content from a given input (e.g., a text prompt, an image, or a 3D shape) has important applications in computer vision and graphics. The problem is challenging, however: in practice it usually requires professional technical artists to spend a great deal of time and money. At the same time, the assets in many online 3D model libraries are usually bare meshes without any materials; to use them in a modern rendering engine, a technical artist has to author high-quality materials, lighting, and normal maps for them. A method that could automatically generate diverse, realistic 3D model assets would therefore be very promising.

To this end, research teams from South China University of Technology, The Hong Kong Polytechnic University, Cross-dimensional Intelligence, Peng Cheng Laboratory, and other institutions have proposed TANGO, a text-driven 3D model stylization method. Given a 3D model and a text prompt, TANGO automatically generates realistic SVBRDF materials, normal maps, and lighting, and it is more robust to low-quality 3D models than prior work. The study has been accepted at NeurIPS 2022.


Project homepage: https://cyw-3d.github.io/tango/

Model Results

For a given text prompt and 3D model, TANGO produces fine, photorealistic detail without causing self-intersections on the model's surface. As shown in Figure 1 below, TANGO not only reproduces realistic reflections on smooth materials (such as gold and silver), but can also estimate point-wise normals for uneven materials (such as brick) to render a bumpy appearance.


Figure 1. Stylized results of TANGO

The key to TANGO's realistic rendering results is accurately separating the components of the shading model (SVBRDF, normal map, and lighting) and learning each of them individually. The separated components are then combined by a spherical-Gaussian differentiable renderer, and the rendered image is fed, together with the input text, into CLIP to compute the loss (a minimal sketch of such a CLIP loss follows Figure 2). To justify this decoupling, the study visualizes each component. Figure 2(a) shows the stylized result for "a pair of shoes made of bricks"; (b) shows the original normals of the 3D model; (c) shows the normals TANGO predicts for each surface point; (d), (e), and (f) show the diffuse, roughness, and specular parameters of the SVBRDF, respectively; and (g) shows the environment light, represented by the spherical Gaussian functions TANGO predicts.


Figure 2. Visualization of the decoupled rendering components
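To make the guidance concrete, here is a minimal sketch of a CLIP-based style loss of this general kind, using OpenAI's `clip` package. The rendered-image tensor is assumed to come from some differentiable renderer (a stand-in for TANGO's spherical-Gaussian renderer) and to already be resized and normalized for CLIP; this is an illustration, not TANGO's actual code.

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

def clip_style_loss(rendered_images: torch.Tensor, prompt: str) -> torch.Tensor:
    """Cosine-distance loss between rendered views and a text prompt.

    rendered_images: (B, 3, 224, 224), already CLIP-preprocessed; in
    TANGO-style training these come from the differentiable renderer.
    """
    text = clip.tokenize([prompt]).to(device)
    image_feat = model.encode_image(rendered_images)
    text_feat = model.encode_text(text)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    # Maximizing cosine similarity == minimizing (1 - similarity).
    return (1.0 - image_feat @ text_feat.t()).mean()
```

Minimizing this loss over many randomly sampled views pushes the learnable rendering components toward the text description.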

The researchers can also edit TANGO's output. For example, in Figure 3 they relight the TANGO result with other light maps, and in Figure 4 they edit the roughness and specular-reflectance parameters to change how reflective the object's surface is (see the sketch after Figure 4).



Figure 3. Relighting the TANGO stylized result


Figure 4. Editing the object's material
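As a rough illustration of what such material edits do, the toy specular lobe below shows how lowering roughness produces a tighter, more mirror-like highlight. This simplified single-lobe model is an assumption for illustration only, not TANGO's actual SVBRDF.

```python
import numpy as np

def specular_highlight(n, v, l, roughness, specular_albedo):
    """Toy isotropic specular lobe: lower roughness -> tighter highlight.

    A simplified stand-in for an SVBRDF specular term, not TANGO's exact
    shading model. n, v, l are unit normal/view/light directions.
    """
    h = (v + l) / np.linalg.norm(v + l)          # half vector
    sharpness = 2.0 / max(roughness ** 2, 1e-4)  # roughness -> lobe width
    return specular_albedo * np.exp(sharpness * (np.dot(n, h) - 1.0))

n = v = np.array([0.0, 0.0, 1.0])
l_off = np.array([0.0, 0.6, 0.8])  # light ~37 degrees off the view direction

# Editing the decoupled parameters changes the look without retraining:
for rough in (0.8, 0.2):
    peak = specular_highlight(n, v, v, rough, 0.9)      # h == n at the peak
    off = specular_highlight(n, v, l_off, rough, 0.9)
    print(f"roughness={rough}: peak={peak:.2f}, off-peak={off:.2f}")
# roughness=0.8 -> broad, dull lobe; roughness=0.2 -> tight, shiny one.
```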

In addition, because TANGO adds surface detail through predicted normal maps rather than vertex displacement, it is also very robust to 3D models with few vertices. As shown in Figure 5, the original lamp and alien models have 41,160 and 68,430 faces, respectively. The researchers downsampled each original model to only 5,000 faces. TANGO performs essentially the same on the original and downsampled models, while Text2Mesh suffers severe self-intersections on the low-quality models (a decimation sketch follows Figure 5).


Figure 5. Robustness test
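The paper does not say which tool was used for the downsampling, but a low-poly test model like this can be reproduced with standard quadric decimation; here is a sketch using Open3D (the file names are placeholders):

```python
import open3d as o3d  # pip install open3d

# Load a high-resolution mesh (the path is a placeholder).
mesh = o3d.io.read_triangle_mesh("lamp.obj")
print("original faces:", len(mesh.triangles))  # e.g. 41,160 for the lamp

# Quadric-error decimation down to ~5,000 faces, matching the paper's test.
low = mesh.simplify_quadric_decimation(target_number_of_triangles=5000)
low.compute_vertex_normals()
print("decimated faces:", len(low.triangles))

o3d.io.write_triangle_mesh("lamp_5k.obj", low)
```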

Principle and Method

TANGO addresses text-guided stylization of 3D objects. The most closely related work, Text2Mesh, uses the pre-trained CLIP model as guidance to predict the color and positional offset of each surface vertex of a 3D model. However, simply predicting vertex colors often yields unrealistic renderings, and irregular vertex offsets can cause severe self-intersections. This study therefore draws on the traditional physically based rendering pipeline: it decouples the rendering process into the prediction of SVBRDF materials, normal maps, and lighting, and represents each decoupled element with spherical Gaussian functions. This physics-based decoupling lets TANGO produce realistic renderings with good robustness.
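A spherical Gaussian is a closed-form lobe on the sphere, G(v; ξ, λ, μ) = μ·exp(λ(v·ξ − 1)), which is what makes such rendering cheap to evaluate and differentiable. Below is a minimal NumPy sketch of an SG lobe and an environment light built from a few of them; the lobe count and values are made up for illustration.

```python
import numpy as np

def spherical_gaussian(v, axis, sharpness, amplitude):
    """Evaluate G(v; xi, lam, mu) = mu * exp(lam * (v . xi - 1)).

    v:         (3,) unit query direction
    axis:      (3,) unit lobe axis xi
    sharpness: scalar lam >= 0 (larger -> narrower lobe)
    amplitude: scalar or (3,) RGB amplitude mu
    """
    return amplitude * np.exp(sharpness * (v @ axis - 1.0))

def environment_light(v, lobes):
    """Environment light as a sum of SG lobes, as in SG-based renderers."""
    return sum(spherical_gaussian(v, xi, lam, mu) for xi, lam, mu in lobes)

# Toy two-lobe environment: a warm key light plus a dim cool fill.
lobes = [
    (np.array([0.0, 0.0, 1.0]), 30.0, np.array([2.0, 1.8, 1.5])),
    (np.array([0.0, 1.0, 0.0]), 5.0, np.array([0.1, 0.1, 0.3])),
]
d = np.array([0.0, 0.1, 1.0])
d /= np.linalg.norm(d)              # query direction must be unit length
print(environment_light(d, lobes))  # RGB radiance arriving from direction d
```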


Figure 6. The TANGO pipeline

Figure 6 shows TANGO's pipeline. Given a 3D model and a text prompt (such as "a shoe made of gold" in the figure), the study first scales the model into the unit sphere and samples a camera position near it. Rays are cast from the camera to find each intersection point x_p with the model and the surface normal n_p at that point. Next, x_p and n_p are fed into the SVBRDF network and the normal network to predict the point's material parameters and normal direction, while multiple spherical Gaussian functions represent the scene lighting. In each training iteration, the study renders an image with the differentiable spherical-Gaussian renderer, encodes augmented copies of the image with CLIP's image encoder, and backpropagates the CLIP loss to update all learnable parameters.
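Putting the pieces together, one training iteration might look like the sketch below. This is pseudocode-style: `sample_camera_on_sphere`, `ray_mesh_intersect`, `sg_render`, and `augment` are hypothetical placeholders for the steps described above, `clip_style_loss` refers to the earlier CLIP sketch, and the network sizes are guesses; none of this is TANGO's actual code.

```python
import torch

# Pseudocode-style sketch of one TANGO-style training iteration. Functions
# marked "hypothetical" are illustrative placeholders, not TANGO's real code.

def mlp(in_dim, out_dim):
    return torch.nn.Sequential(
        torch.nn.Linear(in_dim, 256), torch.nn.ReLU(),
        torch.nn.Linear(256, out_dim))

svbrdf_net = mlp(6, 5)  # (x_p, n_p) -> diffuse RGB, roughness, specular
normal_net = mlp(6, 3)  # (x_p, n_p) -> refined per-point normal
sg_light = torch.nn.Parameter(torch.randn(32, 7))  # 32 lobes: axis + sharpness + RGB

optimizer = torch.optim.Adam(
    [*svbrdf_net.parameters(), *normal_net.parameters(), sg_light], lr=5e-4)

for step in range(2000):
    cam = sample_camera_on_sphere()               # hypothetical: random view
    x_p, n_p = ray_mesh_intersect(mesh, cam)      # hypothetical: ray casting
    feats = torch.cat([x_p, n_p], dim=-1)
    brdf = svbrdf_net(feats)                      # per-point material parameters
    normals = normal_net(feats)                   # predicted normals add detail
    image = sg_render(x_p, normals, brdf, sg_light, cam)  # hypothetical SG renderer
    loss = clip_style_loss(augment(image), "a shoe made of gold")  # CLIP sketch above

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```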

Summary

This paper proposes TANGO, a new method that generates realistic appearance styles for 3D models from input text and is robust to low-quality meshes. By decoupling the appearance style into an SVBRDF, local geometric variation (point-wise normals), and lighting conditions, representing these with spherical Gaussian functions, and rendering them differentiably, the method can be learned with CLIP as the loss supervision.

Compared with existing methods, TANGO remains robust even on low-quality 3D models. However, providing geometric detail via point-wise normals, while avoiding self-intersections, also slightly limits how pronounced the rendered surface bumpiness can be. The study regards TANGO and the vertex-offset-based Text2Mesh as good preliminary attempts in their respective directions that will inspire more follow-up research.



Source: 51cto.com