


Let's talk about several large models and autonomous driving concepts that have become popular recently.
Applications of large models remain popular everywhere. Around the beginning of October, a series of rather gimmicky papers appeared that try to apply large models to autonomous driving. I have also been discussing related topics with many friends recently. Writing this article, I realized two things: on the one hand, myself included, we have been conflating several closely related but actually distinct concepts; on the other hand, extending from these concepts, there are some interesting thoughts worth sharing and discussing with everyone.
Large (Language) Model
This is undoubtedly the most popular direction at present and the one where papers are most concentrated. How can large language models help autonomous driving? On the one hand, like GPT-4V, they provide extremely powerful semantic understanding through alignment with images; I will set that aside for now. On the other hand, an LLM can be used as an agent that directly produces driving behavior. The latter is really the sexiest research direction at the moment, and it is inextricably linked to the line of work on embodied AI.
Most of the latter type of work seen so far uses an LLM that is 1) used directly, 2) fine-tuned with supervised learning, or 3) fine-tuned with reinforcement learning for the driving task. In essence, none of this escapes the previous paradigm of learning-based driving. A very direct question, then: why might an LLM do this better? Intuitively, driving with words is inefficient and verbose. Then one day it suddenly clicked for me: the LLM actually gives the agent a pretraining stage through language! One important reason RL was hard to generalize before is that it was hard to unify diverse tasks and pretrain on all kinds of generic data; every task had to be trained from scratch. The LLM solves exactly this problem nicely. But several problems remain unsolved: 1) after pretraining, must language be kept as the output interface? This is inconvenient for many tasks and, to some extent, causes redundant computation. 2) LLM-as-agent still does not overcome the essential problems of existing model-free RL methods, and all the problems of model-free methods remain. Recently we have also seen some attempts at model-based LLM-as-agent, which may be an interesting direction. A minimal sketch of what one LLM-as-agent step looks like follows.
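The sketch below illustrates one step of the pattern. Everything in it is an assumption for illustration: query_llm is a hypothetical stand-in for whatever chat-completion client you actually use, and the JSON action schema is invented for this example.

```python
import json

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion client."""
    raise NotImplementedError

def observation_to_text(obs: dict) -> str:
    # Serialize perception outputs into language -- exactly the verbose
    # interface questioned above.
    return (f"Ego speed: {obs['ego_speed_mps']:.1f} m/s. "
            f"Lead vehicle: {obs['lead_dist_m']:.1f} m ahead, moving at "
            f"{obs['lead_speed_mps']:.1f} m/s.")

def llm_driving_step(obs: dict) -> dict:
    prompt = (
        "You are a driving agent. Given the scene description, reply with "
        'JSON: {"acceleration_mps2": float, "steering_rad": float, '
        '"reason": str}.\n' + observation_to_text(obs)
    )
    reply = query_llm(prompt)
    return json.loads(reply)  # language kept as the output interface

obs = {"ego_speed_mps": 12.0, "lead_dist_m": 30.0, "lead_speed_mps": 10.0}
# action = llm_driving_step(obs)  # would invoke the (stub) LLM above
```

Note that the action must round-trip through text on every step; that round trip is the redundant computation mentioned above.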
One last complaint about each of these papers: merely plugging in an LLM and having it output a reason does not make your model interpretable. That reason may still be nonsense... Things that carried no guarantees before do not become guaranteed just because a sentence is printed alongside.
Large (Visual) Model
In fact, the purely visual large model has still not shown that magical "emergence" moment. When people talk about large vision models, they generally mean one of two things: one is a super feature extractor for visual information pre-trained on massive web data, such as CLIP, DINO, or SAM, which greatly improves a model's semantic understanding; the other is a joint model over (image, action, ...) pairs, as realized by world models such as GAIA-1.
In my view, the former is just the result of continuing to scale up linearly along the traditional path; for now, it is hard to see it bringing a qualitative change to autonomous driving. The latter has steadily entered researchers' field of vision this year thanks to continuous publicity from Wayve and Tesla. When people talk about world models, they often bundle in the assumptions that the model is end-to-end (directly outputs actions) and that it is tied to LLMs; this assumption is one-sided. My own understanding of world models is quite limited, so rather than expand on it, let me recommend LeCun's interview and @Yu Yang's survey of model-based RL:
Yu Yang: On learning environment models (world models)
https://www.php.cn/link/a2cdd86a458242d42a17c2bf4feff069
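For concreteness, here is a minimal latent world-model sketch in the model-based spirit: encode the observation, predict the next latent state under an action, decode. The architecture and dimensions are illustrative assumptions and bear no relation to GAIA-1's actual design.

```python
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Toy latent dynamics model: z' = f(z, a), then decode z' back."""
    def __init__(self, obs_dim=64, act_dim=2, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.ReLU())
        self.dynamics = nn.Sequential(
            nn.Linear(latent_dim + act_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        self.decoder = nn.Linear(latent_dim, obs_dim)

    def forward(self, obs, action):
        z = self.encoder(obs)
        z_next = self.dynamics(torch.cat([z, action], dim=-1))
        return self.decoder(z_next)  # predicted next observation

model = WorldModel()
obs, action = torch.randn(1, 64), torch.randn(1, 2)
pred_next_obs = model(obs, action)
# Training would minimize e.g. MSE against the logged next observation;
# planning then searches over action sequences inside the learned model
# rather than in the real world -- the model-based ingredient that
# model-free LLM-as-agent approaches lack.
```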
Pure Visual Autonomous Driving
This one is actually easy to understand: it refers to an autonomous driving system that relies only on visual sensors. This is the best and ultimate wish of autonomous driving: driving with a pair of eyes, the way humans do. The concept is generally associated with the two kinds of large models above, because the complex semantics in images require strong abstraction to extract useful information. Under Tesla's recent continuous publicity offensive, the concept also overlaps with the end-to-end idea discussed below. But in fact there are many ways to achieve pure visual driving; end-to-end is naturally one of them, but not the only one. The hardest problem in pure visual autonomous driving is that vision is inherently insensitive to 3D information, and large models have not essentially changed this. Concretely: 1) passively receiving electromagnetic waves means vision, unlike other sensors, cannot directly measure geometric quantities in 3D space; 2) perspective makes distant objects extremely sensitive to error (see the worked example below), which is very unfriendly to downstream planning and control, which by default operate in a 3D space with uniform error. But is driving by vision the same thing as accurately estimating 3D distance and speed? I think this representation question, alongside semantic understanding, is one worth deep study in pure visual autonomous driving.
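To quantify point 2): for a stereo rig with focal length f (pixels) and baseline b (meters), depth follows z = f·b/d from disparity d, so a fixed one-pixel disparity error yields a depth error that grows quadratically with distance. The rig parameters below are illustrative.

```python
# Depth error sensitivity under perspective: dz ~= z^2 / (f * b) * dd,
# obtained by differentiating z = f * b / d with respect to disparity d.
f_px, baseline_m, disparity_err_px = 1000.0, 0.5, 1.0

for z in (10.0, 30.0, 60.0, 100.0):  # true depth in meters
    depth_err = z ** 2 / (f_px * baseline_m) * disparity_err_px
    print(f"z = {z:5.1f} m -> depth error ~ {depth_err:5.2f} m")
# 0.20 m of error at 10 m becomes 20 m of error at 100 m: a 10x increase
# in distance costs 100x in depth error.
```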
End-to-End Autonomous Driving
This concept refers to using one jointly optimized model from the sensors all the way to the final output control signal (broadly, I think it can also include outputs up to the waypoints of the upstream planning layer). It can be a direct end-to-end method that feeds sensor input through a neural network and outputs control signals, as ALVINN did as early as the 1980s, or a staged end-to-end method like this year's CVPR best paper, UniAD. What these methods have in common is that the downstream supervision signal can propagate directly to upstream modules, instead of each module optimizing its own self-defined objective. Overall this is the right idea; after all, joint optimization is how deep learning made its fortune. But for systems like autonomous driving or general-purpose robots, which are extremely complex and deal with the physical world, many problems must still be overcome in engineering implementation and in organizing and using data efficiently. A toy illustration of the joint optimization follows.
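The sketch below shows what "downstream supervision reaches upstream" means in practice: the planning loss backpropagates through the perception module rather than perception optimizing its own separate objective. The modules and dimensions are placeholders, not any real stack.

```python
import torch
import torch.nn as nn

perception = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # sensors -> features
planner = nn.Sequential(nn.Linear(64, 2))                  # features -> waypoint

optimizer = torch.optim.Adam(
    list(perception.parameters()) + list(planner.parameters()), lr=1e-3
)

sensor = torch.randn(8, 128)         # a batch of fake sensor readings
target_waypoint = torch.randn(8, 2)  # expert waypoints as supervision

pred = planner(perception(sensor))
loss = nn.functional.mse_loss(pred, target_waypoint)
loss.backward()   # gradients flow into `perception`, not just `planner`
optimizer.step()  # both modules update toward the downstream objective
```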
Feed-Forward End-to-End Autonomous Driving
This concept seems to be rarely mentioned, but I find that the value of end-to-end itself is real; the problem lies in the Feed-Forward way of consuming observations. Myself included, people have tended to assume by default that end-to-end driving must take a Feed-Forward form, because 99% of current deep-learning-based methods assume this structure: the final output of interest (such as the control signal) is u = f(x), where x is the sensor observations and f can be a very complex function. But in some problems we want the final output to satisfy, or at least be close to satisfying, certain properties, and the Feed-Forward form struggles to guarantee this. So there is another way to write it: u* = argmin_u g(u, x) s.t. h(u, x) ≤ 0, where g is a cost over candidate outputs and h collects the constraints to be satisfied. The sketch below contrasts the two forms.
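A minimal sketch of the contrast, with a toy cost g and constraint h (both invented for illustration); SciPy's SLSQP performs the constrained solve, and the feed-forward output serves as its initial guess.

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([1.5])  # the "observation"

def g(u, x):  # toy cost: track a reference that depends on x
    return float((u[0] - 2.0 * x[0]) ** 2)

def h(u, x):  # toy constraint, feasible iff h(u, x) <= 0
    return float(u[0] - (x[0] + 1.0))

u_ff = np.array([2.0 * x[0]])  # pretend feed-forward net output u = f(x)
print("feed-forward u:", u_ff, "| constraint violated:", h(u_ff, x) > 0)

# SciPy "ineq" constraints require fun(u) >= 0, so we pass -h.
res = minimize(g, x0=u_ff, args=(x,), method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda u: -h(u, x)}])
print("optimized u*:", res.x, "| constraint satisfied:", h(res.x, x) <= 1e-6)
```

Here the feed-forward answer u = 3.0 violates the constraint u ≤ 2.5, while the optimizer lands on u* = 2.5; note that x0=u_ff warm-starts the solver from the network's output.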
With the development of large models, this direct Feed-Forward end-to-end autonomous driving approach has seen a wave of revival. Large models are of course very powerful, but let me pose a question for everyone to think about: if a large model were an omnipotent end-to-end solver, shouldn't it also be able to play Go or Gomoku end-to-end, making paradigms like AlphaGo meaningless? I believe everyone knows the answer is no. Of course, the Feed-Forward form can still serve as a fast approximate solver and achieve good results in most scenarios.
Judging from the disclosed plans of the various companies that use a Neural Planner, the neural part only provides a number of initialization proposals for a subsequent optimization stage, to alleviate the severe non-convexity of that optimization; this is exactly the warm-start pattern (x0 = u_ff) in the sketch above, and essentially the same thing as the fast rollout in AlphaGo. But nobody would call AlphaGo's subsequent MCTS search a mere "fallback" solution... Finally, I hope this helps clarify the differences and connections between these concepts, and that everyone can be clear about what they are actually talking about when discussing these issues.
