New BEV LV Fusion Solution: Lift-Attend-Splat Beyond BEVFusion
Paper: Lift-Attend-Splat: Bird's-eye-view camera-lidar fusion using transformers
Paper link: https://arxiv.org/pdf/2312.14919.pdf
For safety-critical applications such as autonomous driving, combining complementary sensor modalities is crucial. Recent camera-lidar fusion methods for autonomous driving rely on monocular depth estimation to improve perception, yet estimating depth from a single image is inherently harder than using the depth measured directly by lidar. This study finds that such approaches do not exploit depth as expected and shows that naively improving depth estimation does not improve object detection performance. Strikingly, removing depth estimation entirely does not degrade detection, suggesting that reliance on monocular depth may be an unnecessary architectural bottleneck in camera-lidar fusion. The paper therefore proposes a new fusion method that bypasses monocular depth estimation altogether and instead uses a simple attention mechanism to select and fuse camera and lidar features on a BEV grid. The results show that the proposed model adjusts its use of camera features based on the availability of lidar features and achieves better 3D detection performance on the nuScenes dataset than baseline models built on monocular depth estimation.
This study introduces a new camera-lidar fusion method called "Lift-Attend-Splat". It avoids monocular depth estimation and instead uses a simple transformer to select and fuse camera and lidar features in BEV. Experiments show that, compared with methods based on monocular depth estimation, the proposed method makes better use of the cameras and improves object detection performance. The contributions of this study are as follows:
- Camera-lidar fusion methods based on the Lift-Splat paradigm do not exploit depth as expected; in particular, we show that they perform equally well or better if monocular depth prediction is removed entirely.
- This paper introduces a new camera-lidar fusion method that uses a simple attention mechanism to fuse camera and lidar features directly in BEV, and demonstrates that it makes better use of the cameras and improves 3D detection performance compared with models based on the Lift-Splat paradigm.
The accuracy of the depth prediction is generally low. Depth quality can be assessed qualitatively and quantitatively by comparing the depth predicted by BEVFusion against lidar depth maps using absolute relative error (Abs.Rel.) and root mean square error (RMSE). As shown in Figure 1, the predicted depth does not accurately reflect the structure of the scene and differs significantly from the lidar depth map, indicating that monocular depth is not exploited as expected. The study also finds that improving depth prediction does not improve object detection performance, and that removing depth prediction entirely has no impact on detection performance.
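As a concrete illustration, the two metrics above can be computed as in the minimal sketch below. It assumes the predicted depth and the lidar depth map are already aligned dense arrays with a validity mask for pixels that have a lidar return; variable names are illustrative, not from the paper.

```python
import numpy as np

def depth_metrics(pred_depth, lidar_depth, valid_mask):
    """Compare predicted depth against a lidar depth map.

    pred_depth, lidar_depth: (H, W) arrays in metres.
    valid_mask: boolean (H, W) array marking pixels with a lidar return.
    """
    pred = pred_depth[valid_mask]
    gt = lidar_depth[valid_mask]

    abs_rel = np.mean(np.abs(pred - gt) / gt)   # absolute relative error (Abs.Rel.)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))   # root mean square error (RMSE)
    return abs_rel, rmse
```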
We propose a camera-lidar fusion method that completely bypasses monocular depth estimation and instead uses a simple transformer to fuse camera and lidar features in bird's-eye view. However, because of the large number of camera and lidar features and the quadratic cost of attention, a transformer architecture is difficult to apply naively to the camera-lidar fusion problem. When projecting camera features into BEV, the geometry of the problem can be used to substantially restrict the scope of attention, since a camera feature should only contribute to positions along its corresponding ray. We apply this idea to camera-lidar fusion and introduce a simple fusion scheme that uses cross-attention between columns in the camera plane and polar rays in the lidar BEV grid. Instead of predicting monocular depth, the cross-attention learns which camera features are most salient given the context provided by the lidar features along their rays.
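The column-to-ray attention can be sketched roughly as follows. This is an illustrative approximation, not the authors' implementation: it assumes camera features have already been arranged as one sequence per image column and lidar BEV features as one sequence per matching polar ray, and it uses a standard multi-head cross-attention layer.

```python
import torch
import torch.nn as nn

class ColumnRayCrossAttention(nn.Module):
    """Illustrative cross-attention between camera-plane columns and their polar rays.

    Queries come from positions along a polar ray in the lidar BEV grid;
    keys/values come from camera features in the corresponding image column,
    so each BEV cell learns which camera feature along its ray is most salient.
    """

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, ray_feats: torch.Tensor, column_feats: torch.Tensor) -> torch.Tensor:
        # ray_feats:    (num_rays, ray_len, dim)  lidar BEV features along each polar ray
        # column_feats: (num_rays, col_len, dim)  camera features in the matching image column
        fused, _ = self.attn(query=ray_feats, key=column_feats, value=column_feats)
        return fused  # same shape as ray_feats, ready to be splatted back onto the BEV grid
```

Restricting attention to matching column/ray pairs keeps the cost linear in the number of columns, avoiding full quadratic attention over all camera and lidar features.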
Our model shares its overall architecture with methods based on the Lift-Splat paradigm, except for how camera features are projected into BEV. As shown in the figure below, it consists of a camera and a lidar backbone that independently generate features for each modality, a projection-and-fusion module that embeds the camera features into BEV and fuses them with the lidar features, and a detection head. For object detection, the final output of the model is the set of object attributes in the scene, represented as 3D bounding boxes with position, dimensions, orientation, velocity, and class.
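A minimal sketch of how these blocks compose is shown below; the module names are placeholders chosen for illustration, not the paper's code.

```python
import torch.nn as nn

class CameraLidarFusionDetector(nn.Module):
    """Illustrative composition of the blocks described above."""

    def __init__(self, camera_backbone, lidar_backbone, project_and_fuse, detection_head):
        super().__init__()
        self.camera_backbone = camera_backbone    # per-image feature extractor
        self.lidar_backbone = lidar_backbone      # produces lidar BEV features
        self.project_and_fuse = project_and_fuse  # lifts camera features into BEV and fuses with lidar
        self.detection_head = detection_head      # predicts 3D boxes: position, size, yaw, velocity, class

    def forward(self, images, point_cloud):
        cam_feats = self.camera_backbone(images)
        lidar_bev = self.lidar_backbone(point_cloud)
        fused_bev = self.project_and_fuse(cam_feats, lidar_bev)
        return self.detection_head(fused_bev)
```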
The Lift-Attend-Splat camera-lidar fusion architecture is shown below. (Left) Overall architecture: features from the camera and lidar backbones are fused before being passed to the detection head. (Inset) The geometry of our 3D projection: the "lift" step embeds the lidar BEV features into the projected horizon by bilinearly sampling the lidar features along the z-direction; the "splat" step is the inverse transformation, using bilinear sampling to project the features from the projected horizon back onto the BEV grid, again along the z-direction. (Right) Details of the projection module.
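Both the "lift" and "splat" steps rely on the same bilinear-sampling primitive, only with sampling grids derived from the geometry in opposite directions. A rough sketch under that assumption (sampling grids precomputed from the camera geometry; not the authors' code):

```python
import torch
import torch.nn.functional as F

def bilinear_resample(features: torch.Tensor, sample_grid: torch.Tensor) -> torch.Tensor:
    """Bilinearly sample `features` at the normalized coordinates in `sample_grid`.

    features:    (B, C, H, W) source plane (e.g. lidar BEV grid or projected horizon).
    sample_grid: (B, H_out, W_out, 2) target coordinates in [-1, 1], derived from the
                 projection geometry (where each target cell falls in the source plane).
    """
    return F.grid_sample(features, sample_grid, mode="bilinear", align_corners=False)

# "Lift":  resample lidar BEV features onto the projected horizon along z.
# "Splat": the inverse resampling, from the projected horizon back onto the BEV grid.
```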
Experimental results