
How long will it take for autonomous driving to be realized?

Apr 11, 2023 07:46 PM

Recently, an article about a serious accident involving a vehicle from one of the new EV makers, which occurred while its "intelligent driving assistance function was turned on" (forgive the long circumlocution; I really don't want to invite any trouble), went viral online, drawing everyone's attention once again to the technical development of autonomous driving and its social implications.


Based on the information available online, the sequence of events was roughly this: owner A was driving on an elevated road with ACC (Adaptive Cruise Control) and LCC (Lane Centering Control) engaged, traveling at 80 km/h in the leftmost lane. A stationary vehicle suddenly appeared ahead in the same lane, with a person, B, standing behind it. Owner A's car neither braked nor swerved, crashing directly into the stationary vehicle and into B, who was killed.

According to reports online, owner A said: "I had the assisted driving system turned on, but the system did not recognize it. I happened to be distracted at the time."

So who should bear responsibility for the accident? The owner of the car that caused it, or the car's designer and manufacturer?

Although I have never purchased a vehicle with such "advanced" intelligent driving functions, and I don't know exactly what the owner's manuals or user agreements say, the common practice among car companies today is clear: while intelligent driving functions are active, the owner must monitor road conditions at all times and be ready to take over the vehicle at any moment.

Because, no matter how loudly it is trumpeted in advertisements, everyone knows perfectly well that today's "smart driving" cannot be called autonomous driving at all; it is still merely assisted driving. It can only provide the driver with assistance functions and cannot replace the driver.

According to Article 51 of the "Shenzhen Special Economic Zone Intelligent and Connected Vehicle Management Regulations", if an intelligent connected vehicle with a driver commits a road traffic safety violation, the public security traffic management authority will deal with the driver in accordance with the law. Article 54 stipulates that if a traffic accident involving an intelligent connected vehicle causes damage due to a defect in the vehicle itself, the driver, owner, or manager of the vehicle may, after paying the prescribed compensation, seek recovery from the manufacturer or seller in accordance with the law.

It is clear from these regulations that in the event of an accident, the driver remains the first party responsible; only if the vehicle can be proven defective may compensation be sought from the car company. But how can an ordinary consumer prove that a vehicle is defective?

I do not intend to analyze in detail whether the cause of this accident was a design defect in the vehicle itself or the driver's responsibility; I only want to use it as a starting point for examining the current state of intelligent driving.

According to the SAE J3016 driving automation classification, at levels below L3, even though the automated driving system can handle operations such as steering, acceleration, and braking, the human driver must still monitor everything on the road. In other words, below L3 the automated system is merely an assistant, and the driver bears full responsibility for the safe operation of the vehicle.
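This division of responsibility can be condensed into a small table. The sketch below is a much-simplified summary for illustration only, not the actual wording of J3016:

```python
# A much-simplified summary of the SAE J3016 division of responsibility.
# An illustrative condensation only, not the standard's actual text.
SAE_LEVELS = {
    0: {"name": "No Automation",          "monitors_road": "driver", "fallback": "driver"},
    1: {"name": "Driver Assistance",      "monitors_road": "driver", "fallback": "driver"},
    2: {"name": "Partial Automation",     "monitors_road": "driver", "fallback": "driver"},
    3: {"name": "Conditional Automation", "monitors_road": "system", "fallback": "driver"},
    4: {"name": "High Automation",        "monitors_road": "system", "fallback": "system"},
    5: {"name": "Full Automation",        "monitors_road": "system", "fallback": "system"},
}

def driver_must_monitor(level: int) -> bool:
    """Below L3, the human driver must monitor the road at all times."""
    return SAE_LEVELS[level]["monitors_road"] == "driver"

print(driver_must_monitor(2))  # True: L2 is assistance, the driver watches the road
print(driver_must_monitor(3))  # False: from L3 up, the system monitors the road
```

The L2/L3 boundary in this table is exactly where liability shifts, which is why, as discussed below, no car company wants to cross it.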

Given this division of responsibility, and with autonomous driving technology still far from mature, labels such as "L2.5" and "L2.9" have proliferated: a rather "creative" naming convention with Chinese characteristics.

Every OEM is playing this game, and none dares claim its system has reached L3, because once L3 is declared, the car company must bear responsibility for accidents that occur while the vehicle is in L3 autonomous mode.

In this state, on one hand, companies compete on technical strength, continuously rolling out more advanced autonomous driving features in the hope of selling more cars; on the other hand, none dares cross the L3 boundary by even half a step. Because as long as it is not L3, every accident has nothing to do with them: at the very least, the owner's manual clearly states that the driver is responsible for being ready to take over the vehicle at any moment.

However, let us look at this from another angle. When you sit in a meeting where you are neither required to speak nor to take notes, do you doze off, glance at your phone, or zone out? Today's L2 assisted driving puts the driver in much the same position, which is why the system has to sound alerts reminding you to monitor the road attentively.

It is as if you have hired a full-time chauffeur, yet must supervise him at all times while he drives. When danger arises, if the chauffeur fails to act, you must intervene in time; otherwise, you are responsible for the accident.

Doesn't this feel a bit anti-human? Do people who buy cars with self-driving features enjoy playing driving-school instructor? If these functions demand our full attention, and full attention is precisely what is hardest for us to sustain, how meaningful are they?

I am not against autonomous driving technology; on the contrary, I strongly support its development. I believe that once the technology matures, everyone will save a great deal of time and energy, and traffic accidents will be greatly reduced. However, today's autonomous driving is far from mature enough for large-scale adoption. Not only is high-level autonomy a long way off, but even basic assistance functions such as AEB (automatic emergency braking), LKA (lane keeping assist), and automated parking cannot yet be made 100% reliable.

(Image source: SAE International)

In 1918, Scientific American published an image titled "A Motorist's Dream: A Car Controlled by a Set of Keys" (pictured below), depicting a self-driving tram. The article predicted that "...in the future, cars with steering wheels will be as obsolete as today's cars with hand pumps!"

(Image source: Scientific American)

For nearly a hundred years, humans have pursued autonomous driving with persistent ambition and, at times, unrealistic fantasy, always feeling it is just 20 years away. Unfortunately, even today, no one can say with confidence when a car that completely eliminates the steering wheel will be able to drive on public roads. And the closer people get to this dream, the more aware they become of the difficulty and complexity of achieving fully autonomous driving.

Realizing autonomous driving depends not on the vehicle alone but on the entire transportation system; not just on OEMs and autonomous driving suppliers, but on progress across every sector of society.

Here are a few suggestions:

1. Improve standards for accident data recording in intelligent driving systems.

Although China already has standards for EDR (Event Data Recorder) systems, these record only basic vehicle information. At this stage, whether an intelligent driving system failed is effectively determined by the car companies themselves, with no effective third-party oversight, because amid the massive, complex data involved, no one other than the car companies and their suppliers can tell exactly what happened. The root cause is the lack of a more detailed national standard: what each data item means, which data must be stored and when, and so on are still undefined. Regulators and industry associations could follow the example of automotive OBD, establish such standards as soon as possible, and continuously improve them.
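To make the idea concrete, here is a purely illustrative sketch of what a standardized event record for an intelligent driving system might contain. Every field name here is my own invention and comes from no existing standard:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical fields for an intelligent-driving event record.
# The names are invented for illustration; no standard defines them.
@dataclass
class DrivingEventRecord:
    timestamp_ms: int         # when the event was captured
    speed_kmh: float          # vehicle speed at the event
    adas_mode: str            # e.g. "ACC+LCC", "OFF"
    driver_takeover: bool     # did the driver intervene?
    brake_command_pct: float  # braking requested by the system, 0-100
    detected_obstacles: int   # objects the perception stack reported

# A record resembling the accident described above: assistance active,
# no braking commanded, no obstacle reported, no driver takeover.
record = DrivingEventRecord(
    timestamp_ms=1681214760000, speed_kmh=80.0, adas_mode="ACC+LCC",
    driver_takeover=False, brake_command_pct=0.0, detected_obstacles=0,
)
print(json.dumps(asdict(record)))
```

If fields like these were mandated and stored in a uniform machine-readable format, a third party could reconstruct what the system saw and commanded without relying on the car company's interpretation.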

2. The country should establish a unified driving scene simulation database and continuously feed it with detailed information on traffic accidents.

Improving autonomous driving algorithms requires the accumulation of massive amounts of data, and it is difficult for any single company to achieve this scale of accumulation on its own in a short period. Only by moving beyond each enterprise working alone, building a win-win model of co-creation, and learning from the practice of open-source software can we marshal the strength of the whole country and put our intelligent driving truly at the forefront of the world.

3. For vehicle type approval, regulators should issue more detailed testing and certification standards as soon as possible.

Although intelligent driving is still in a developmental stage and system solutions vary widely, some basic principles and methods can already be defined. Moreover, once a nationwide driving scene simulation database is established, it can serve as a benchmark for testing all newly approved vehicles in the cloud, avoiding lengthy road tests and reducing costs.
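The cloud benchmarking idea amounts to replaying every vehicle's driving stack against a shared bank of scenarios. The toy sketch below illustrates the shape of such a harness; the scenario fields, the `plan_brake` stand-in, and its crude stopping-distance rule are all invented for illustration and bear no relation to any real system:

```python
# Toy sketch of benchmarking a planning stack against a shared scenario
# database in the cloud. `plan_brake` stands in for a real (and vastly
# more complex) autonomous driving stack; all values are assumptions.
scenarios = [
    {"id": "cut-in",             "obstacle_distance_m": 30.0, "speed_kmh": 80.0},
    {"id": "stationary-vehicle", "obstacle_distance_m": 60.0, "speed_kmh": 80.0},
    {"id": "clear-road",         "obstacle_distance_m": None, "speed_kmh": 100.0},
]

def plan_brake(obstacle_distance_m, speed_kmh):
    """Stand-in planner: brake whenever an obstacle falls within a naive,
    speed-dependent stopping distance (a grossly simplified rule of thumb)."""
    if obstacle_distance_m is None:
        return False
    stopping_distance_m = (speed_kmh / 10) ** 2
    return obstacle_distance_m <= stopping_distance_m

# Replay every scenario and record whether the planner chose to brake.
results = {s["id"]: plan_brake(s["obstacle_distance_m"], s["speed_kmh"]) for s in scenarios}
print(results)
```

A regulator running the same scenario bank against every candidate system gets comparable pass/fail evidence without putting a single test vehicle on the road.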

4. Find an OTA management approach that is both efficient and reliable. Because autonomous driving algorithms iterate rapidly, car companies must continuously update software over the air. Current OTA oversight uses a filing system, which means the functions and performance of an already-approved vehicle may change substantially after an OTA update. Finding an efficient and reliable way to manage OTA is a major challenge facing automotive regulation today.

Finally, my sincere best wishes to the companies that have poured real money into autonomous driving. May you have the resources and patience to survive the long night and see the light of dawn!
