To capture the surrounding environment more accurately and to provide performance redundancy, autonomous vehicles are equipped with a large number of complementary sensors, including millimeter-wave radar, cameras, lidar, infrared thermal imaging, and ultrasonic radar. To take full advantage of each sensor's strengths, high-end intelligent driving perception systems are bound to evolve toward deep fusion of multiple sensors.
Through multi-sensor fusion, the autonomous driving system can build a more accurate model of its surroundings, thereby improving its safety and reliability. For example, millimeter-wave radar compensates for the camera's weakness in rainy weather and can detect obstacles at relatively long range, but it cannot identify an obstacle's specific shape; lidar, in turn, makes up for that shortcoming of millimeter-wave radar. Therefore, to fuse the external data collected by different sensors into a basis for the controller's decisions, a multi-sensor fusion algorithm is needed to form a panoramic perception of the environment.
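The complementarity described above can be sketched as a minimal late-fusion step: the camera supplies the object class (which radar cannot), and the radar supplies range and radial velocity (which the camera measures poorly), paired by bearing. This is an illustrative sketch only; the class names, fields, and angle threshold are hypothetical, not part of any production fusion stack.

```python
from dataclasses import dataclass

@dataclass
class RadarDetection:
    azimuth_deg: float   # bearing to target
    range_m: float       # radial distance
    velocity_mps: float  # radial (Doppler) velocity

@dataclass
class CameraDetection:
    azimuth_deg: float   # bearing derived from the image column
    label: str           # object class from a vision model

def fuse(radar_dets, camera_dets, max_angle_gap_deg=3.0):
    """Pair each camera detection with the nearest radar detection by bearing.

    The camera contributes appearance/class; the radar contributes range and
    speed. Camera detections with no nearby radar return keep None fields
    rather than being discarded.
    """
    fused = []
    for cam in camera_dets:
        best = min(radar_dets,
                   key=lambda r: abs(r.azimuth_deg - cam.azimuth_deg),
                   default=None)
        if best and abs(best.azimuth_deg - cam.azimuth_deg) <= max_angle_gap_deg:
            fused.append({"label": cam.label,
                          "range_m": best.range_m,
                          "velocity_mps": best.velocity_mps})
        else:
            fused.append({"label": cam.label,
                          "range_m": None,
                          "velocity_mps": None})
    return fused
```

For instance, a radar return at 10.2° paired with a camera "car" detection at 10.0° yields a single fused object carrying both the class and the measured range and velocity.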
The following is an introduction to the three key sensors for achieving high-level autonomous driving: 4D millimeter wave radar, lidar and infrared thermal imaging.
Millimeter-wave radar is arguably the earliest sensor to reach mass production in autonomous driving. Although its accuracy is not as high as lidar's, it still ranks high among sensor categories: it penetrates fog, smoke, and dust well and performs better overall in severe weather, serving mainly as a ranging and speed sensor. Currently, the number of millimeter-wave radars installed per vehicle remains low; from January to August 2022, newly delivered passenger cars carried only 0.86 millimeter-wave radars per vehicle on average.
This is not to say that traditional millimeter-wave radar performs poorly. For L2-level cars, the stable point clouds produced by high-resolution millimeter-wave radar are key to completing the vehicle's 360° environmental perception. But this is not enough: for L3, L4, and higher-level models, its perception accuracy and fusion effectiveness fall well short. With 4D millimeter-wave radars beginning to ship in vehicles this year, 2023 will be the year large-scale front-fitted mass production truly begins. According to Yole's forecast, the global 4D millimeter-wave radar market will reach US$3.5 billion by 2027.
Currently, 4D imaging radar is applied on the market in two main directions. One is replacing traditional low-resolution forward radar to improve the multi-sensor fusion performance of high-end intelligent driving. The second main application scenario is driving-parking integrated 4D surround high-resolution radar (divided into point-cloud-enhanced and imaging types), whose performance is slightly lower than that of the forward radar.
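What makes the radar "4D" is the added elevation channel on top of the traditional range, azimuth, and Doppler measurements, which is what allows a height-resolved point cloud. A minimal sketch of converting one such measurement into a Cartesian point (the function name and axis conventions are illustrative assumptions):

```python
import math

def radar_point_to_cartesian(range_m, azimuth_rad, elevation_rad, doppler_mps):
    """Convert one 4D radar measurement (range, azimuth, elevation, radial
    velocity) into a Cartesian point plus its Doppler value.

    Traditional automotive radar reports only range/azimuth/Doppler; the
    elevation channel is what lets 4D imaging radar distinguish, e.g., an
    overhead gantry sign from a stopped vehicle at the same range and bearing.
    """
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)  # forward
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)  # left
    z = range_m * math.sin(elevation_rad)                          # up
    return x, y, z, doppler_mps
```

A target dead ahead at 10 m with zero elevation maps to (10, 0, 0); the same range at high elevation maps to a large z, which is exactly the cue low-resolution forward radar lacks.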
Since this year, putting lidar on the car has become the latest "label" of automobile intelligence. At the Guangzhou Auto Show, more and more models were equipped with lidar, including the Xpeng G9, WM7, Nezha S, and Salon Mecha Dragon. Compared with ordinary radar, lidar offers high resolution, good concealment, and strong anti-interference capability. Likened to the "eyes" of autonomous vehicles, it determines how far the autonomous driving industry can evolve and is an extremely important part of covering the "last mile" toward deploying autonomous driving.
Lidar has irreplaceable advantages in high-level autonomous driving, which places strict requirements on information accuracy. At present, new car-making forces, traditional OEMs, and Internet companies alike are all positioning themselves here, leading to a sudden surge in demand for lidar production capacity. According to statistics from Zuosi Auto Research, lidar installations in new domestic passenger cars reached 24,700 in H1 2022; in the second half of 2022, more than 10 new lidar-equipped models, including the Xpeng G9 and WM7, will be delivered in China, significantly increasing per-vehicle lidar counts, and total installations are expected to exceed 80,000 units for the full year.
Compared with traditional CIS cameras and lidar, infrared thermal imaging has obvious advantages in scenarios such as high dynamic range, rain, fog, darkness, strong backlight, and sandstorms, so its introduction into high-level autonomous driving solutions is an inevitable trend. Because it detects heat, infrared thermal imaging equipment with integrated infrared detectors is particularly suited to distinguishing pedestrians from inanimate obstacles, an advantage other sensors do not have. It is unaffected by rain, fog, haze, and lighting conditions, and its observation distance can reach several hundred meters. In the future it will occupy a place in the field of autonomous driving.
Previously, the main reason infrared thermal imaging failed to reach vehicles was its persistently high price. In recent years, with the localization of key components such as infrared thermal imaging chips, costs have dropped and the technology has been widely adopted in the civilian field; autonomous driving will quickly expand the infrared detector market. According to data from the China Industrial Research Institute, China's infrared thermal imaging camera market reached US$6.68 billion in 2020 and is expected to grow at a compound annual growth rate of 10.8% from 2021, reaching US$12.34 billion by 2025.
Conclusion: Multi-sensor fusion autonomous driving solutions are an inevitable trend in future automobile development. Fusing information from multiple sensors can compensate for the limitations of any single sensor and improve the safety redundancy and data reliability of the autonomous driving system. However, each sensor has a different coordinate system, different data formats, and even different collection frequencies, so designing the fusion algorithm is no simple task.
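The two concrete difficulties named above, differing coordinate systems and differing collection frequencies, can be illustrated with a minimal sketch: transforming a sensor-frame point into the common vehicle frame using the sensor's mounting pose, and linearly interpolating one sensor's track to another sensor's timestamp. This is a simplified 2D illustration under assumed conventions, not a complete calibration or synchronization pipeline.

```python
import math

def to_vehicle_frame(point_xy, sensor_yaw_rad, sensor_offset_xy):
    """Rotate a sensor-frame 2D point by the sensor's mounting yaw, then
    translate by its mounting position, yielding vehicle-frame coordinates."""
    x, y = point_xy
    c, s = math.cos(sensor_yaw_rad), math.sin(sensor_yaw_rad)
    return (c * x - s * y + sensor_offset_xy[0],
            s * x + c * y + sensor_offset_xy[1])

def align_to_timestamp(track, t_query):
    """Linearly interpolate a sensor track (a time-sorted list of (t, x, y))
    to a common query time, compensating for different sampling rates."""
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        if t0 <= t_query <= t1:
            w = (t_query - t0) / (t1 - t0)
            return (x0 + w * (x1 - x0), y0 + w * (y1 - y0))
    raise ValueError("query time outside track")
```

Only after every detection lives in the same frame at the same instant can the association step (such as the radar-camera pairing discussed earlier) be performed meaningfully; real systems additionally handle 3D rotations, clock skew, and motion compensation.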