
Advanced driving simulation: Driving scene reconstruction with realistic surround data

WBOY
Release: 2024-01-01 12:58:13

Original title: DrivingGaussian: Composite Gaussian Splatting for Surrounding Dynamic Autonomous Driving Scenes

Paper link: https://arxiv.org/pdf/2312.07920.pdf

Code link: https://pkuvdig.github.io/DrivingGaussian/

Author affiliations: Peking University; Google Research; University of California, Merced

Core idea:

This article proposes DrivingGaussian, an efficient and effective framework for surround-view dynamic autonomous driving scenes. For complex scenes with moving objects, it first uses incremental static 3D Gaussians to sequentially and progressively model the static background of the entire scene. It then uses a composite dynamic Gaussian graph to handle multiple moving objects, reconstructing each object individually and recovering its accurate position and occlusion relationships within the scene. LiDAR priors are further incorporated into Gaussian Splatting to reconstruct the scene with finer detail and maintain panoramic consistency. DrivingGaussian outperforms existing methods in driving scene reconstruction and enables photorealistic surround-view synthesis with high fidelity and multi-camera consistency.
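To make the incremental static step concrete, here is a minimal Python/NumPy sketch of the idea, assuming the ego trajectory is split into segments and Gaussians are seeded from the points observed in each segment. All names (`Gaussian`, `StaticScene`, `init_gaussians_from_points`, `reconstruct_static_background`) are hypothetical helpers written for illustration, not the authors' released code.

```python
# Minimal sketch of incremental static 3D Gaussian construction.
# All classes/functions here are hypothetical illustrations, not the authors' code.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Gaussian:
    mean: np.ndarray        # (3,) center in world coordinates
    scale: np.ndarray       # (3,) per-axis extent
    rotation: np.ndarray    # (4,) quaternion
    opacity: float
    color: np.ndarray       # (3,) RGB (a full system would use SH coefficients)

@dataclass
class StaticScene:
    gaussians: list = field(default_factory=list)

def init_gaussians_from_points(points: np.ndarray) -> list:
    """Seed one coarse Gaussian per point (e.g., LiDAR or SfM points)."""
    return [Gaussian(mean=p,
                     scale=np.full(3, 0.1),
                     rotation=np.array([1.0, 0.0, 0.0, 0.0]),
                     opacity=0.5,
                     color=np.full(3, 0.5)) for p in points]

def reconstruct_static_background(frame_segments):
    """Incrementally grow the static background, one trajectory segment at a time.

    frame_segments: list of dicts, each with 'points' (N, 3) observed in that segment.
    Earlier segments stay fixed; new Gaussians are only added (and would be
    optimized against that segment's surround-view images in a real system).
    """
    scene = StaticScene()
    for segment in frame_segments:
        new_gaussians = init_gaussians_from_points(segment["points"])
        # In the real pipeline these would be optimized with a photometric loss
        # against the segment's multi-camera images before being frozen.
        scene.gaussians.extend(new_gaussians)
    return scene

# Toy usage: two segments along the ego trajectory.
segments = [{"points": np.random.rand(100, 3) * 10},
            {"points": np.random.rand(100, 3) * 10 + np.array([10.0, 0.0, 0.0])}]
background = reconstruct_static_background(segments)
print(len(background.gaussians))  # 200
```

The key design choice the sketch tries to convey is that the background grows segment by segment along the driving route, so distant regions are only instantiated once the ego vehicle actually observes them.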

Main contributions:

To the best of the authors' knowledge, DrivingGaussian is the first framework to use Composite Gaussian Splatting to represent and model large-scale dynamic driving scenes.

It introduces two novel modules: incremental static 3D Gaussians and the composite dynamic Gaussian graph. The former incrementally reconstructs the static background, while the latter models multiple dynamic objects with a Gaussian graph. Aided by LiDAR priors, the method recovers complete geometry in large-scale driving scenes (a minimal sketch of the dynamic Gaussian graph idea follows this list).

Comprehensive experiments demonstrate that DrivingGaussian outperforms previous methods on challenging autonomous driving benchmarks and can simulate various corner cases for downstream tasks.
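As referenced above, here is a minimal sketch of the composite dynamic Gaussian graph idea: each moving object keeps its own Gaussians in a canonical object frame plus a per-timestamp pose (e.g., from tracked 3D boxes), and is transformed into the world frame before being merged with the static background. The class and function names (`DynamicObjectNode`, `compose_scene`, etc.) are hypothetical and do not come from the paper.

```python
# Sketch of a composite dynamic Gaussian graph: one node per moving object,
# each holding object-frame Gaussian centers plus a pose track.
# Hypothetical illustration, not the paper's implementation.
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose:
    rotation: np.ndarray     # (3, 3) rotation matrix, object -> world
    translation: np.ndarray  # (3,)

@dataclass
class DynamicObjectNode:
    object_id: str
    means_obj: np.ndarray    # (N, 3) Gaussian centers in the object's canonical frame
    poses: dict              # timestamp -> Pose (e.g., from tracked 3D boxes)

    def means_world(self, t: float) -> np.ndarray:
        """Place this object's Gaussians into the world frame at time t."""
        pose = self.poses[t]
        return self.means_obj @ pose.rotation.T + pose.translation

def compose_scene(static_means: np.ndarray, nodes: list, t: float) -> np.ndarray:
    """Concatenate static-background Gaussians with every dynamic node visible at time t."""
    dynamic = [node.means_world(t) for node in nodes if t in node.poses]
    return np.concatenate([static_means] + dynamic, axis=0)

# Toy usage: a static background blob and one car moving along x.
static_means = np.random.rand(500, 3) * 20
car = DynamicObjectNode(
    object_id="car_0",
    means_obj=np.random.rand(50, 3),
    poses={0.0: Pose(np.eye(3), np.array([0.0, 0.0, 0.0])),
           1.0: Pose(np.eye(3), np.array([5.0, 0.0, 0.0]))},
)
print(compose_scene(static_means, [car], 1.0).shape)  # (550, 3)
```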

Network design:

This article introduces a new framework called DrivingGaussian for representing surround-view dynamic autonomous driving scenes. The key idea is to model complex driving scenes hierarchically using sequential data from multiple sensors. With Composite Gaussian Splatting, the entire scene is decomposed into a static background and dynamic objects, and each part is reconstructed separately. Specifically, the static scene is first constructed sequentially from surround-view multi-camera images using the incremental static 3D Gaussian method. A composite dynamic Gaussian graph is then employed to reconstruct each moving object individually and dynamically integrate them into the static background. On this basis, global rendering is performed via Gaussian Splatting to capture real-world occlusion relationships between the static background and dynamic objects. In addition, the paper introduces a LiDAR prior into the Gaussian Splatting representation, which recovers more accurate geometry and maintains better multi-view consistency than point clouds obtained from random initialization or SfM.
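The LiDAR prior can be pictured as initializing Gaussian centers from aggregated LiDAR sweeps instead of random or SfM points. The sketch below illustrates this under the assumption that ego poses per sweep are known; the helper names (`lidar_to_world`, `voxel_downsample`, `lidar_prior_means`) are made up for illustration, and a real pipeline would additionally filter out points belonging to moving objects.

```python
# Sketch: initializing Gaussian centers from aggregated LiDAR sweeps instead of
# random/SfM points. Helper names are hypothetical, not the authors' API.
import numpy as np

def lidar_to_world(points_sensor: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Transform one LiDAR sweep (N, 3) from sensor to world coordinates."""
    return points_sensor @ R.T + t

def voxel_downsample(points: np.ndarray, voxel: float = 0.2) -> np.ndarray:
    """Keep one point per voxel to get a uniform, tractable set of Gaussian seeds."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def lidar_prior_means(sweeps, ego_poses, voxel: float = 0.2) -> np.ndarray:
    """Aggregate all sweeps into the world frame and downsample into Gaussian centers."""
    world_points = [lidar_to_world(p, R, t) for p, (R, t) in zip(sweeps, ego_poses)]
    return voxel_downsample(np.concatenate(world_points, axis=0), voxel)

# Toy usage: two sweeps with simple poses.
sweeps = [np.random.rand(1000, 3) * 30, np.random.rand(1000, 3) * 30]
poses = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([2.0, 0.0, 0.0]))]
means = lidar_prior_means(sweeps, poses)
print(means.shape)  # (M, 3), M <= 2000
```

Compared with sparse SfM points, densely aggregated LiDAR gives the optimization well-placed initial Gaussians on real surfaces, which is the intuition behind the better geometry and multi-view consistency reported above.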

Extensive experiments show that the method achieves state-of-the-art performance on public autonomous driving datasets. Even without the LiDAR prior, it still performs well, demonstrating its versatility in reconstructing large-scale dynamic scenes. In addition, the framework supports dynamic scene construction and corner-case simulation, which helps verify the safety and robustness of autonomous driving systems.


Figure 1. DrivingGaussian achieves photorealistic rendering for surround-view dynamic autonomous driving scenes. Naive methods [13, 49] either produce undesirable artifacts and blurring in large-scale backgrounds or struggle to reconstruct dynamic objects and detailed scene geometry. DrivingGaussian is the first to introduce Composite Gaussian Splatting to effectively represent the static background and multiple dynamic objects in complex surround-view driving scenes, enabling high-quality surround-view synthesis across multiple cameras and long-term dynamic scene reconstruction.


Figure 2. Overall pipeline of the method. Left: DrivingGaussian takes sequential data from multiple sensors, including multi-camera images and LiDAR. Center: To represent large-scale dynamic driving scenes, the paper proposes Composite Gaussian Splatting, which consists of two parts: the first incrementally reconstructs the broad static background, while the second builds multiple dynamic objects with a Gaussian graph and dynamically integrates them into the scene. Right: DrivingGaussian demonstrates good performance across multiple tasks and application scenarios.


Figure 3. Composite Gaussian Splatting with incremental static 3D Gaussians and composite dynamic Gaussian graphs. The entire scene is decomposed into a static background and dynamic objects; each part is reconstructed separately and then integrated for global rendering.
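For intuition about how occlusion emerges from global rendering, the heavily simplified loop below merges static and dynamic Gaussian centers, projects them into one camera, sorts them front to back, and alpha-composites per pixel. It treats each Gaussian as a single-pixel splat, unlike the anisotropic 2D Gaussians and tile-based rasterizer used in practice, and every name here is a hypothetical illustration rather than the paper's renderer.

```python
# Heavily simplified composite rendering: merge static + dynamic Gaussians,
# project, depth-sort, and front-to-back alpha composite per pixel.
# Single-pixel splats only; illustrative, not the paper's rasterizer.
import numpy as np

def render(means, colors, opacities, K, R, t, hw=(64, 64)):
    H, W = hw
    cam = means @ R.T + t                        # world -> camera coordinates
    in_front = cam[:, 2] > 1e-3                  # drop points behind the camera
    cam, colors, opacities = cam[in_front], colors[in_front], opacities[in_front]
    order = np.argsort(cam[:, 2])                # front-to-back for correct occlusion
    cam, colors, opacities = cam[order], colors[order], opacities[order]

    proj = cam @ K.T                             # pinhole projection
    uv = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)

    image = np.zeros((H, W, 3))
    transmittance = np.ones((H, W))
    for (u, v), c, a in zip(uv, colors, opacities):
        if 0 <= u < W and 0 <= v < H:
            image[v, u] += transmittance[v, u] * a * c
            transmittance[v, u] *= (1.0 - a)     # farther splats get occluded
    return image

# Toy usage with an arbitrary pinhole camera (merged static + dynamic centers).
K = np.array([[60.0, 0.0, 32.0], [0.0, 60.0, 32.0], [0.0, 0.0, 1.0]])
means = np.random.rand(2000, 3) * np.array([10, 10, 20]) + np.array([-5, -5, 2])
img = render(means, np.random.rand(2000, 3), np.full(2000, 0.6), K, np.eye(3), np.zeros(3))
print(img.shape)  # (64, 64, 3)
```

Because the static background and all dynamic objects are composited into one Gaussian set before rendering, occlusion between a moving car and the background is handled by the same depth-ordered compositing rather than by any special-case logic.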

Experimental results:

(Qualitative comparisons and quantitative result tables are presented as figures in the original paper.)

Summary:

This article introduces DrivingGaussian, a novel framework for representing large-scale dynamic autonomous driving scenes based on the proposed Composite Gaussian Splatting. DrivingGaussian progressively models the static background with incremental static 3D Gaussians and captures multiple moving objects with a composite dynamic Gaussian graph. LiDAR priors are further exploited to achieve accurate geometry and multi-view consistency. DrivingGaussian achieves state-of-the-art performance on two autonomous driving datasets, enabling high-quality surround-view synthesis and dynamic scene reconstruction.

Citation:

Zhou, X., Lin, Z., Shan, X., Wang, Y., Sun, D., & Yang, M. (2023). DrivingGaussian: Composite Gaussian Splatting for Surrounding Dynamic Autonomous Driving Scenes. arXiv preprint arXiv:2312.07920.


Original link:

https://www.php.cn/link/a878dbebc902328b41dbf02aa87abb58
