Thanks to the differentiable rendering provided by NeRF, recent 3D generative models have achieved stunning results on static objects. However, 3D generation of more complex, deformable categories such as the human body remains a great challenge. This paper proposes an efficient compositional NeRF representation of the human body, enabling high-resolution (512x256) 3D human generation without super-resolution models. EVA3D significantly surpasses existing solutions on four large-scale human datasets, and the code has been open-sourced.
# Background
To address these challenges, the paper proposes an efficient compositional 3D human NeRF representation that enables high-resolution (512x256) 3D human GAN training and generation. The proposed human NeRF representation and the 3D human GAN training framework are introduced below.
# Efficient Human NeRF Representation
The human NeRF proposed in this paper is built on the parametric human model SMPL, which provides convenient control over human pose and shape. For NeRF modeling, as shown in the figure below, the human body is divided into 16 parts, each corresponding to a small NeRF network for local modeling. When rendering a part, only its local NeRF needs to be evaluated; this sparse rendering scheme achieves native high-resolution rendering with lower compute.

Concretely, to render a human body with given shape and pose parameters, the sampling points in posed space are first transformed into canonical space via inverse linear blend skinning (LBS). Each canonical-space sampling point is then matched against the bounding boxes of the local NeRFs, and the one or several models whose boxes contain it are evaluated to obtain the point's color and density. When a sampling point falls into the overlapping region of multiple local NeRFs, every covering model is evaluated and the results are interpolated with a window function. Finally, these colors and densities are composited by volume rendering along each ray to obtain the final image.
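The per-point query described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the two bounding boxes, the separable polynomial window, and the stand-in `query_local_nerf` outputs are all assumptions made for the sketch.

```python
import numpy as np

# Illustrative axis-aligned bounding boxes for two overlapping local NeRFs,
# in canonical-space coordinates (values invented for this sketch).
BBOXES = {
    "torso": (np.array([-0.3, -0.5, -0.2]), np.array([0.3, 0.5, 0.2])),
    "left_arm": (np.array([0.2, 0.0, -0.1]), np.array([0.8, 0.4, 0.1])),
}

def window_weight(x, bbox_min, bbox_max):
    """Smooth weight: ~1 deep inside the box, falling to 0 at its faces.
    (An assumed window function, not necessarily the paper's exact choice.)"""
    center = (bbox_min + bbox_max) / 2.0
    half = (bbox_max - bbox_min) / 2.0
    u = (x - center) / half          # normalize into [-1, 1] per axis
    if np.any(np.abs(u) >= 1.0):
        return 0.0                   # outside the box
    return float(np.prod(1.0 - u**2))

def query_local_nerf(name, x):
    """Stand-in for a small per-part MLP: deterministic toy (rgb, density)."""
    base = (len(name) % 5) / 5.0
    rgb = np.clip(np.abs(x) + base, 0.0, 1.0)
    density = float(np.linalg.norm(x) + base)
    return rgb, density

def composite_query(x):
    """Query every local NeRF whose box contains x; blend overlaps by window."""
    weights, rgbs, densities = [], [], []
    for name, (lo, hi) in BBOXES.items():
        w = window_weight(x, lo, hi)
        if w > 0.0:
            rgb, sigma = query_local_nerf(name, x)
            weights.append(w); rgbs.append(rgb); densities.append(sigma)
    if not weights:
        return np.zeros(3), 0.0      # empty space: zero density
    w = np.array(weights) / np.sum(weights)
    rgb = (w[:, None] * np.array(rgbs)).sum(axis=0)
    return rgb, float((w * np.array(densities)).sum())
```

A point such as `[0.25, 0.2, 0.0]` lies in the overlap of both boxes, so both toy models are evaluated and their outputs blended; a point far outside every box returns zero density, which is what makes the sparse rendering cheap.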
Based on the proposed efficient human NeRF representation, the paper implements a 3D human GAN training framework. In each training iteration, an SMPL parameter set and camera parameters are first sampled from the dataset, and a Gaussian noise vector z is randomly generated. Using the proposed human NeRF, the sampled parameters are rendered into a 2D human image, which serves as a fake sample. Together with real samples from the dataset, standard adversarial training of the GAN is then performed.
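One training iteration can be sketched as below. Everything here is a toy stand-in: the renderer and discriminator are stubs, and the non-saturating GAN losses are a common default, not confirmed to be the paper's exact objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_human_nerf(smpl_params, camera, z):
    """Stand-in for the compositional NeRF renderer conditioned on noise z;
    returns a fake 512x256 RGB image (toy random output here)."""
    return rng.random((512, 256, 3))

def discriminator(image):
    """Toy critic returning a scalar realism score (a real one would be a CNN)."""
    return float(image.mean() - 0.5)

def training_step(dataset_sample):
    smpl_params, camera, real_image = dataset_sample
    z = rng.standard_normal(512)                      # Gaussian noise code
    fake_image = render_human_nerf(smpl_params, camera, z)
    # Non-saturating GAN losses via softplus, log(1 + exp(.)).
    d_loss = np.log1p(np.exp(-discriminator(real_image))) \
           + np.log1p(np.exp(discriminator(fake_image)))
    g_loss = np.log1p(np.exp(-discriminator(fake_image)))
    return d_loss, g_loss
```

The key structural point is that the generator is the NeRF renderer itself, so gradients from the 2D discriminator flow back into the 3D representation.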
2D human datasets, such as DeepFashion, are usually prepared for 2D vision tasks, so their pose diversity is very limited. To quantify this imbalance, the paper counts the frequency of the models' face orientations in DeepFashion. As shown in the figure below, the orange line represents the distribution of face orientations in DeepFashion; it is extremely unbalanced, which makes learning a 3D human representation difficult. To alleviate this problem, the authors propose a human-pose-guided sampling method that flattens the distribution curve, shown by the other colored lines in the figure. This lets the model see more diverse, larger-angle images of the human body during training, which helps it learn 3D human geometry. The authors also analyzed the sampling parameters experimentally: as the table below shows, after adding pose-guided sampling, image quality (FID) drops slightly, but the learned 3D geometry (Depth) improves significantly.
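One simple way to realize such pose-guided sampling is to weight each training image inversely to the frequency of its face-orientation bin. This is a plausible sketch of the idea; the binning, the `temperature` knob, and the exact weighting scheme are assumptions, not the paper's published formula.

```python
import numpy as np

def pose_guided_weights(orientations, n_bins=36, temperature=1.0):
    """Per-image sampling weights, inversely proportional to the frequency of
    each image's face-orientation bin, so rare (side/back) views are drawn
    more often. Orientations are angles in degrees in [-180, 180]."""
    bins = np.linspace(-180.0, 180.0, n_bins + 1)
    idx = np.clip(np.digitize(orientations, bins) - 1, 0, n_bins - 1)
    freq = np.bincount(idx, minlength=n_bins).astype(float)
    w = 1.0 / np.maximum(freq[idx], 1.0) ** temperature
    return w / w.sum()                # normalized sampling distribution
```

With 90 frontal images and 10 side-facing ones, the two groups each receive half of the total sampling mass, i.e. the orientation distribution seen during training is flattened exactly as the figure's flatter curves suggest.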
The following figure shows some EVA3D generation results. EVA3D can randomly sample human appearance, and can control the rendering camera parameters, human pose, and body shape.
The paper conducts experiments on four large-scale human datasets: DeepFashion, SHHQ, UBCFashion, and AIST. It compares against the state-of-the-art static 3D object generation algorithms EG3D and StyleSDF, as well as ENARF-GAN, an algorithm designed specifically for 3D human generation. For metrics, the paper evaluates rendering quality (FID/KID), accuracy of human body control (PCK), and quality of the generated geometry (Depth). As shown in the figure below, EVA3D significantly surpasses previous solutions on all datasets and all metrics.
Finally, the paper also shows some of EVA3D's application potential. First, the study tested interpolation in the latent space. As shown in the figure below, EVA3D can transition smoothly between two 3D humans, and the intermediate results maintain high quality. The paper also conducted GAN inversion experiments, using Pivotal Tuning Inversion, an algorithm commonly used for 2D GAN inversion. As shown in the figure on the right below, this method recovers the appearance of the reconstruction target fairly well, but many details are lost in the geometry. Inversion of 3D GANs thus remains a very challenging task.
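The latent-space interpolation is conceptually just a walk between two noise codes, each decoded by the generator into a 3D human. A minimal sketch (the decoder is omitted; linear interpolation is assumed here, though spherical interpolation is also common for Gaussian latents):

```python
import numpy as np

def latent_lerp(z0, z1, steps=5):
    """Linear interpolation between two latent codes z0 and z1.
    Returns an array of shape (steps, dim); each row would be fed to the
    generator to render one intermediate 3D human (rendering not shown)."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - t) * z0 + t * z1
```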
This paper proposes EVA3D, the first high-definition 3D human NeRF generation algorithm, which can be trained using only 2D human image data. EVA3D achieves state-of-the-art performance on multiple large-scale human datasets and shows potential for downstream applications. The training and testing code of EVA3D has been open-sourced, and everyone is welcome to try it!