
LidaRF: Studying LiDAR Data for Street View Neural Radiance Fields (CVPR'24)


Photo-realistic simulation plays a key role in applications such as autonomous driving, where advances in neural radiance fields (NeRFs) may enable better scalability through the automatic creation of digital 3D assets. However, the reconstruction quality of street scenes suffers from the highly collinear camera motion along the street and the sparse sampling at high driving speeds. Moreover, such applications often require rendering from camera poses that deviate from the input trajectory, for example to accurately simulate maneuvers such as lane changes. LidaRF presents several insights that allow better use of lidar data to improve NeRF quality in street views. First, the framework learns a geometric scene representation from the lidar data, which is fused with a grid-based implicit representation and decoded jointly, so that rendering benefits from the stronger geometric cues carried by the explicit point cloud. Second, a robust occlusion-aware depth supervision strategy is proposed, which accumulates lidar points into denser point clouds to improve NeRF reconstruction quality in street scenes. Third, augmented training views are generated from the accumulated lidar points to further improve novel view synthesis under realistic driving scenarios. Together, these components give the framework a more accurate geometric scene representation and significantly better rendering quality in real driving scenarios.

The contributions of LidaRF are mainly reflected in three aspects:

(i) Fusing lidar encoding with grid features to enhance the scene representation. While lidar is commonly used as a natural source of depth supervision, incorporating lidar into the NeRF input offers great potential for geometric induction, but is not straightforward to implement. To this end, a grid-based representation is adopted, and features learned from the point cloud are fused into the grid so as to inherit the advantages of the explicit point-cloud representation. Following the success of 3D perception frameworks, a 3D sparse convolutional network is used as an effective and efficient structure to extract geometric features from the local and global context of the lidar point cloud.

(ii) Robust occlusion-aware depth supervision. As in existing work, lidar is used here as a source of depth supervision, but to a greater extent. Since the sparsity of lidar points limits their effectiveness, especially in low-texture areas, denser depth maps are generated by aggregating lidar points across neighboring frames. However, the depth maps obtained this way do not account for occlusion, which produces erroneous depth supervision. A robust depth supervision scheme is therefore proposed, borrowing from curriculum learning: depth is supervised gradually from the near field to the far field, and erroneous depths are progressively filtered out during NeRF training, so that depth is learned from the lidar more effectively.

(iii) Lidar-based view augmentation. Furthermore, given the view sparsity and limited coverage in driving scenarios, lidar is used to densify the training views: the accumulated lidar points are projected into new training views, which may deviate somewhat from the driving trajectory. These lidar-projected views are added to the training dataset. They likewise do not account for occlusion, but applying the supervision scheme above resolves the occlusion problem and improves performance. Although the method also applies to general scenes, this work focuses on the evaluation of street scenes and achieves significant improvements over existing techniques, both quantitatively and qualitatively.

LidaRF also shows advantages in applications that require larger deviations from the input views, significantly improving NeRF quality in these challenging street-scene settings.

LidaRF overall framework overview

LidaRF takes a 3D position as input and outputs the corresponding density and color. It combines hash encoding with a lidar encoding extracted by a 3D sparse UNet. In addition, augmented training data are generated via lidar projection, and geometry prediction is trained with the proposed robust depth supervision scheme.


#1) Hybrid representation of lidar encoding

Lidar point clouds carry strong geometric guidance, which is extremely valuable for NeRF (neural radiance fields). However, relying solely on lidar features for the scene representation yields low-resolution rendering because of the sparsity of the lidar points (even with temporal accumulation). In addition, lidar has a limited field of view, e.g. it cannot capture building surfaces above a certain height, which leaves blank renderings in those areas. In contrast, our framework fuses lidar features with high-resolution spatial grid features, exploiting the advantages of both and learning them jointly to achieve high-quality, complete scene rendering.

Lidar feature extraction. The geometric feature extraction process for each lidar point works as follows. Referring to Figure 2, the lidar point clouds of all frames of the sequence are first aggregated to build a denser point cloud. This point cloud is then voxelized into a voxel grid, where the spatial positions of the points within each voxel are averaged to form an initial 3-dimensional feature per occupied voxel. Inspired by the widespread success of 3D perception frameworks, scene geometry is encoded by applying a 3D sparse UNet to the voxel grid, which allows learning from the global context of the scene geometry. The 3D sparse UNet takes the voxel grid and its 3-dimensional features as input and outputs neural volumetric features, an n-dimensional feature for each occupied voxel; a sketch of the voxelization step follows.
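To make the voxelization concrete, below is a minimal NumPy sketch under assumed conventions: the voxel size is a placeholder, and the 3D sparse UNet is left as an external module since its implementation is not detailed here.

```python
import numpy as np

def voxelize_point_cloud(points, voxel_size=0.2):
    """Voxelize an aggregated lidar point cloud.

    points: (N, 3) array of lidar points accumulated over all frames.
    Returns the integer coordinates of occupied voxels and, for each one,
    the mean position of its points, which serves as the initial
    3-dimensional voxel feature described above. voxel_size is a placeholder.
    """
    coords = np.floor(points / voxel_size).astype(np.int64)         # (N, 3)
    uniq, inverse = np.unique(coords, axis=0, return_inverse=True)  # occupied voxels
    sums = np.zeros((len(uniq), 3))
    np.add.at(sums, inverse, points)                 # sum point positions per voxel
    counts = np.bincount(inverse, minlength=len(uniq)).astype(np.float64)
    means = sums / counts[:, None]                   # average position per voxel
    # (uniq, means) would then be fed to a 3D sparse UNet (external module),
    # which outputs an n-dimensional neural feature per occupied voxel.
    return uniq, means
```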

Lidar feature query. For each sample point x along a ray to be rendered, its lidar features are queried if there are at least K lidar points within a search radius R; otherwise its lidar features are set to empty (all zeros). Specifically, the Fixed Radius Nearest Neighbor (FRNN) search is used to find the index set of the K lidar points nearest to x, denoted $\mathcal{N}_x$. Unlike the method in [9], which predetermines the ray sampling points before training starts, our method performs the FRNN search on the fly, because as NeRF training converges, the sample point distribution produced by the proposal network dynamically concentrates near surfaces. Following the Point-NeRF approach, a multilayer perceptron F maps the lidar feature of each point into a neural scene description: for the i-th neighboring point $p_i$ of x with lidar feature $f_i$, F takes the feature and the relative position as input and outputs

$$\phi_i = F\left(f_i,\, p_i - x\right).$$

The lidar encoding $\phi_L$ of x is then obtained by aggregating the neural scene descriptions of its K neighbors with standard inverse distance weighting:

$$\phi_L(x) = \sum_{i \in \mathcal{N}_x} \frac{w_i}{\sum_{j \in \mathcal{N}_x} w_j}\,\phi_i, \qquad w_i = \frac{1}{\lVert p_i - x \rVert}.$$
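As an illustration, here is a minimal sketch of the FRNN query and inverse-distance aggregation, assuming SciPy's cKDTree for the neighbor search and a generic PyTorch MLP for F; the values of K and R and the concatenation of feature and relative position are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
import torch
from scipy.spatial import cKDTree

K, R = 8, 1.0  # neighbor count and search radius; illustrative values

def lidar_encoding(x, tree, lidar_pts, lidar_feats, F):
    """Query the lidar encoding phi_L at sample position x (shape (3,)).

    tree: cKDTree built over lidar_pts (M, 3);
    lidar_feats: (M, n) per-point features from the sparse UNet;
    F: MLP mapping (feature, relative position) -> neural scene description.
    """
    dists, idx = tree.query(x, k=K, distance_upper_bound=R)
    if np.isfinite(dists).sum() < K:      # fewer than K points within radius R
        return None                       # encoding is set to all zeros upstream
    rel = torch.as_tensor(lidar_pts[idx] - x, dtype=torch.float32)   # (K, 3)
    feats = torch.as_tensor(lidar_feats[idx], dtype=torch.float32)   # (K, n)
    phi = F(torch.cat([feats, rel], dim=-1))                         # (K, d)
    w = 1.0 / torch.as_tensor(dists, dtype=torch.float32).clamp_min(1e-6)
    return (w[:, None] * phi).sum(0) / w.sum()       # inverse-distance weighting
```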

Feature fusion for radiance decoding. The lidar encoding $\phi_L$ is concatenated with the hash encoding $\phi_h$, and a multilayer perceptron $F_\alpha$ predicts the density $\alpha$ and a density embedding h for each sample. Finally, another multilayer perceptron $F_c$ predicts the corresponding color c from the spherical harmonics encoding SH of the viewing direction d together with the density embedding h.
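A minimal PyTorch sketch of this two-stage decoding; the layer sizes, feature dimensions, and activations are assumptions, since the text does not specify them.

```python
import torch
import torch.nn as nn

class RadianceDecoder(nn.Module):
    """Fuse the lidar encoding with the hash encoding, then decode density
    and view-dependent color. Dimensions and layer sizes are illustrative."""

    def __init__(self, dim_lidar=32, dim_hash=32, dim_h=15, dim_sh=16):
        super().__init__()
        # F_alpha: fused encodings -> (density, density embedding h)
        self.F_alpha = nn.Sequential(
            nn.Linear(dim_lidar + dim_hash, 64), nn.ReLU(),
            nn.Linear(64, 1 + dim_h),
        )
        # F_c: (SH-encoded view direction, h) -> RGB color
        self.F_c = nn.Sequential(
            nn.Linear(dim_sh + dim_h, 64), nn.ReLU(),
            nn.Linear(64, 3), nn.Sigmoid(),
        )

    def forward(self, phi_L, phi_h, sh_dir):
        out = self.F_alpha(torch.cat([phi_L, phi_h], dim=-1))
        density = torch.relu(out[..., :1])    # non-negative density alpha
        h = out[..., 1:]                      # density embedding
        color = self.F_c(torch.cat([sh_dir, h], dim=-1))
        return density, color
```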


#2) Robust depth supervision


In addition to feature encoding, the lidar points are also used as a source of depth supervision. However, due to their sparsity, the resulting benefit is limited and insufficient for reconstructing low-texture areas such as pavement. We therefore accumulate neighboring lidar frames to increase density. Although the 3D points accurately capture scene structure, occlusion between points must be considered when projecting them onto the image plane for depth supervision. Occlusions arise from the increased displacement between the camera and the lidar of neighboring frames, and they produce false depth supervision, as shown in Figure 3. Because the lidar remains sparse even after accumulation, this problem is very difficult to handle, making standard graphics techniques such as z-buffering inapplicable. In this work, a robust supervision scheme is proposed to automatically filter out spurious depth supervision while training NeRF.
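To illustrate where the spurious supervision comes from, here is a hedged sketch of projecting the accumulated points into a camera to form the densified depth map; the camera convention and near-plane threshold are assumptions, not the paper's specification.

```python
import numpy as np

def project_accumulated_depth(points_world, T_cam_world, K_intr, hw):
    """Project accumulated lidar points into one camera to form a denser,
    but occlusion-unaware, depth map. Assumed conventions: x_cam =
    R @ x_world + t, pinhole intrinsics K_intr, image size hw = (H, W)."""
    H, W = hw
    pts = points_world @ T_cam_world[:3, :3].T + T_cam_world[:3, 3]
    z = pts[:, 2]
    front = z > 0.1                                  # assumed near-plane cutoff
    uvw = pts[front] @ K_intr.T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inb = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    depth = np.full((H, W), np.inf)
    # Per-pixel z-buffering keeps the nearest point, but an occluded background
    # point often falls on a pixel where the occluder has no lidar sample at
    # all, so spurious depths survive -- hence the robust scheme below.
    np.minimum.at(depth, (v[inb], u[inb]), z[front][inb])
    return depth
```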

Occlusion-aware robust supervision scheme. We design a curriculum training strategy: the model is first trained with closer, more reliable depth data that is less susceptible to occlusion, and it gradually incorporates farther depth data as training progresses. At the same time, the model discards depth supervision that is anomalously far from its own predictions; a sketch of this strategy follows.
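A minimal sketch of such a curriculum, with an assumed linear near-to-far schedule and a fixed consistency tolerance (hypothetical values; the paper's actual schedule and thresholds may differ).

```python
import torch

def robust_depth_loss(pred_depth, lidar_depth, step, total_steps,
                      d_max=80.0, tol=2.0):
    """Curriculum depth supervision (a sketch, not the paper's exact recipe).

    Near-field depths are supervised first, the supervised range grows as
    training progresses, and lidar depths that deviate strongly from the
    current prediction are discarded as likely occlusions."""
    # Near-to-far curriculum: the depth cutoff grows linearly with training.
    cutoff = d_max * min(1.0, 0.1 + 0.9 * step / total_steps)
    in_range = lidar_depth < cutoff
    # Robust filtering: drop supervision far from the model's own estimate.
    consistent = (pred_depth - lidar_depth).abs() < tol
    mask = (in_range & consistent).float()
    if mask.sum() == 0:
        return pred_depth.new_zeros(())
    return ((pred_depth - lidar_depth).abs() * mask).sum() / mask.sum()
```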

#3) Lidar-based view augmentation

Recall that, due to the forward motion of the vehicle camera, the training images are sparse and have limited field-of-view coverage, which challenges NeRF reconstruction, especially when a novel view deviates from the vehicle trajectory. Here we leverage lidar to augment the training data. First, the point cloud of each lidar frame is colored by projecting it onto its synchronized camera image and interpolating the RGB values. The colored point clouds are accumulated and projected onto a set of synthetically augmented views, producing the synthetic images and depth maps shown in Figure 2. These augmented views do not account for occlusion, but the robust supervision scheme above handles the resulting errors and improves performance.
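A hedged sketch of the splatting step, assuming a pinhole camera and a simple far-to-near painting order; the function name and thresholds are illustrative.

```python
import numpy as np

def splat_augmented_view(points_world, colors, T_new_world, K_intr, hw):
    """Render a synthetic training view by splatting colored lidar points.

    colors: (N, 3) RGB values interpolated from each point's synchronized
    camera. Camera conventions are assumed as in the depth projection above.
    """
    H, W = hw
    pts = points_world @ T_new_world[:3, :3].T + T_new_world[:3, 3]
    z = pts[:, 2]
    keep = z > 0.1
    uvw = pts[keep] @ K_intr.T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    inb = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    image = np.zeros((H, W, 3))
    depth = np.full((H, W), np.inf)
    # Paint far-to-near so nearer points overwrite farther ones; remaining
    # occlusion errors are handled by the robust supervision scheme above.
    order = np.argsort(-z[keep][inb])
    ii = np.where(inb)[0][order]
    image[v[ii], u[ii]] = colors[keep][ii]
    depth[v[ii], u[ii]] = z[keep][ii]
    return image, depth
```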

Experimental comparative analysis

(Quantitative and qualitative comparison figures from the paper appeared here.)
