Table of Contents
Research background and significance
V2Xverse: Vehicle-road collaborative driving simulation platform
CoDriving: End-to-end self-driving model for efficient collaboration
End-to-end autonomous driving network
Driving-oriented collaboration strategy
Experimental results
Summary

Open source! V2Xverse: Shanghai Jiao Tong University releases the first simulation platform and end-to-end model for V2X

Jun 10, 2024, 12:42 PM


Synchronized vehicle-road collaborative driving data


V2X-aided autonomous driving (V2X-AD, Vehicle-to-everything-aided autonomous driving) has great potential for enabling safer driving strategies. Researchers have studied the transmission and communication aspects of V2X-AD extensively, but how these infrastructure and communication resources actually improve driving performance remains underexplored. This highlights the need to study collaborative autonomous driving: how to design information-sharing strategies that are efficient for driving planning, so as to improve the driving performance of each vehicle. Doing so requires two key foundations: a platform that can provide a data environment for V2X-AD, and an end-to-end driving system that integrates complete driving-related functions with an information-sharing mechanism. With the former, vehicles can exchange, over the communication network between vehicles and infrastructure, the real-time environmental information needed for driving. With the latter, the driving system can obtain driving-related information from other vehicles and from the infrastructure and combine it with its own planning, yielding more efficient driving. Security and privacy protection also need to be considered alongside these two foundations. In short, when designing driving-planning strategies for V2X-AD, attention should be paid to the efficiency of the information-sharing strategy so that each vehicle's driving performance improves.

" For this reason, researchers from Shanghai Jiao Tong University and Shanghai Artificial Intelligence Laboratory published a new research article "Towards Collaborative Autonomous Driving: Simulation Platform and End-to-End System" proposes CoDriving: an end-to-end collaborative driving system that uses an information sharing strategy for driving planning to achieve efficient communication and collaboration. A simulation platform V2Xverse was built, which provides a complete training and testing environment for collaborative driving, including the generation of vehicle-road collaborative driving data sets, the deployment of full-stack collaborative driving systems, and closed-loop driving performance evaluation and driving tasks in customizable scenarios. Evaluation. "

At the same time, V2Xverse integrates the training and deployment/testing code of multiple existing collaborative perception methods, and evaluates comprehensive driving capability with a variety of test tasks: 3D object detection, path planning, and closed-loop autonomous driving. V2Xverse breaks through the limitation of existing collaborative perception methods that can only "see" but not "drive": it supports embedding existing collaborative perception methods into a complete driving system and testing their driving performance in a simulation environment. The researchers believe this brings better functional extensions, and a test benchmark closer to real driving scenarios, to perception-based vehicle-road collaboration research in autonomous driving.


  • Paper link: https://arxiv.org/pdf/2404.09496
  • Code link: https://github.com/CollaborativePerception/V2Xverse

Research background and significance

This article focuses on collaborative autonomous driving based on V2X (Vehicle-to-everything) communication. Compared with single-vehicle autonomous driving, collaborative autonomous driving improves perception and driving performance through information exchange between the vehicle and its surroundings (such as roadside units and pedestrians carrying smart devices), which benefits safe driving in complex scenarios with limited visibility (Figure 1).

Figure 1. A dangerous "ghost probe" scenario: the single vehicle cannot perceive the occluded object

Currently, V2X-based vehicle-road collaboration work mostly focuses on optimizing module-level perception capabilities; how to use collaborative perception to improve final driving performance in an integrated system remains underexplored.

To address this, this article aims to extend collaborative perception into a collaborative driving system covering comprehensive driving capabilities, including the key modules of perception, prediction, planning, and control. Achieving collaborative autonomous driving requires two key foundations: first, a platform that can provide a data environment for V2X-AD; second, an end-to-end driving system that integrates complete driving-related functions with an information-sharing mechanism. From the platform perspective, this work builds V2Xverse, a comprehensive collaborative autonomous driving simulation platform providing a complete pipeline from vehicle-road collaborative dataset generation, through full-stack collaborative driving system deployment, to closed-loop driving performance evaluation. From the driving-system perspective, this article introduces CoDriving, a new end-to-end collaborative driving system that designs and embeds a V2X-communication-based collaboration module in a complete autonomous driving framework, improving collaborative driving performance by sharing perceptual information. The core idea of CoDriving is a new driving-planning-oriented information-sharing strategy: by using spatially sparse but driving-critical visual features as the communication content, it optimizes communication efficiency while improving driving performance.

V2Xverse: Vehicle-road collaborative driving simulation platform

The key feature of V2Xverse is that it supports both offline benchmark generation for driving-related subtasks and online closed-loop evaluation of driving performance in different scenarios, fully supporting the development of collaborative autonomous driving systems. To create a V2X-AD scene, V2Xverse places multiple smart vehicles with complete driving capabilities in the scene, and deploys roadside units on both sides of the road according to certain strategies to provide the vehicles with supplementary views. To support the development of collaborative autonomous driving methods, V2Xverse provides vehicle-to-vehicle and vehicle-to-roadside-unit communication modules, complete driving signals and expert annotations for system training, and a variety of dangerous scenarios for closed-loop driving evaluation. The platform framework is shown in Figure 2.

Figure 2. V2Xverse simulation platform framework

Compared with existing Carla-based autonomous driving simulation platforms, V2Xverse has three advantages. First, V2Xverse supports multi-vehicle driving simulation, while the mainstream carla-leaderboard and its derivative platforms only support single-vehicle driving simulation. Second, V2Xverse supports full driving-function simulation, while existing collaborative perception simulation platforms only support functions related to the perception module. Third, V2Xverse supports comprehensive V2X-AD scenarios, including diverse sensor devices, model integration, and flexible scenario customization; see Table 1.

Table 1. Comparison between V2Xverse and existing Carla-based autonomous driving simulation platforms
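The scene construction described in this section (multiple smart vehicles with full driving stacks, plus roadside units deployed along the road to supplement the vehicles' views) might be sketched as a configuration like the following. All field names and the RSU placement heuristic here are illustrative assumptions, not V2Xverse's actual API.

```python
# Hypothetical sketch of a V2X-AD scene configuration: ego vehicles with
# full driving stacks, plus roadside units (RSUs) placed at intervals
# along the route to provide supplementary views.

def place_roadside_units(route_length_m, spacing_m=100.0, lateral_offset_m=3.5):
    """Place RSUs on alternating sides of the road at a fixed spacing
    (a simple stand-in for V2Xverse's placement strategy)."""
    rsus = []
    n = int(route_length_m // spacing_m) + 1
    for i in range(n):
        side = 1 if i % 2 == 0 else -1  # alternate left/right roadside
        rsus.append({
            "id": f"rsu_{i}",
            "x": i * spacing_m,
            "y": side * lateral_offset_m,
            "sensors": ["lidar", "camera"],
        })
    return rsus

scene = {
    "ego_vehicles": [
        {"id": "ego_0", "stack": "codriving", "sensors": ["lidar", "camera"]},
        {"id": "ego_1", "stack": "codriving", "sensors": ["lidar", "camera"]},
    ],
    "roadside_units": place_roadside_units(route_length_m=500.0),
    "communication": {"links": ["v2v", "v2i"], "bandwidth_limit": None},
    "expert_annotations": True,   # ground-truth driving signals for training
    "evaluation": "closed_loop",  # online closed-loop driving evaluation
}

print(len(scene["roadside_units"]))  # 6 RSUs over a 500 m route
```

A real scenario description would also cover the dangerous-scenario triggers (e.g. occluded pedestrians) used for closed-loop evaluation; this sketch only shows the agent/RSU layout idea.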

CoDriving: End-to-end self-driving model for efficient collaboration

CoDriving consists of two components (see Figure 3): 1) an end-to-end single-vehicle autonomous driving network, which converts sensor inputs into driving control signals; and 2) driving-oriented collaboration, in which collaborators share driving-critical perceptual features to achieve efficient communication, and each vehicle's BEV perceptual features are enhanced through feature aggregation. The enhanced perceptual features help the system produce more accurate perception results and planning predictions.

Figure 3. The overall framework of CoDriving

End-to-end autonomous driving network

The end-to-end single-vehicle autonomous driving network learns to predict waypoints from multi-modal sensor inputs, and a control module converts the waypoints into driving control signals. To achieve this, CoDriving integrates the modular components required for driving into an end-to-end system, including a 3D object detector, a waypoint predictor, and a controller. CoDriving adopts the Bird's Eye View (BEV) representation because it provides a unified global coordinate system, avoids complex coordinate transformations, and better supports collaboration based on spatial information.
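The pipeline described above (sensor input → BEV features → waypoint prediction → low-level control) can be sketched as follows, with numpy stand-ins for the learned modules. The shapes, the straight-ahead waypoint head, and the controller gains are illustrative assumptions, not CoDriving's actual architecture.

```python
# Minimal sketch of the end-to-end single-vehicle pipeline:
# sensor points -> BEV grid -> waypoints -> steering/throttle.
import numpy as np

def encode_bev(sensor_points, grid=(64, 64), extent=32.0):
    """Rasterize points into a BEV occupancy grid (a stand-in for a
    learned BEV feature encoder)."""
    bev = np.zeros(grid)
    for x, y in sensor_points:
        i = int((x + extent) / (2 * extent) * grid[0])
        j = int((y + extent) / (2 * extent) * grid[1])
        if 0 <= i < grid[0] and 0 <= j < grid[1]:
            bev[i, j] = 1.0
    return bev

def predict_waypoints(bev, n_waypoints=4, step=2.0):
    """Toy waypoint head that drives straight ahead; a stand-in for a
    learned waypoint predictor conditioned on the BEV features."""
    return np.array([(step * (k + 1), 0.0) for k in range(n_waypoints)])

def control_from_waypoints(waypoints, k_steer=0.5, k_throttle=0.2):
    """Simple controller turning the nearest waypoint into
    steering/throttle commands."""
    target = waypoints[0]
    steer = k_steer * np.arctan2(target[1], target[0])
    throttle = min(1.0, k_throttle * np.linalg.norm(target))
    return {"steer": float(steer), "throttle": float(throttle)}

bev = encode_bev([(5.0, 0.5), (10.0, -1.0)])
wps = predict_waypoints(bev)
cmd = control_from_waypoints(wps)
print(cmd)  # straight-ahead waypoints yield zero steering
```

In the actual system the detector and waypoint predictor are neural networks trained end-to-end on the expert annotations V2Xverse provides; this sketch only mirrors the data flow.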

Driving-oriented collaboration strategy

V2X collaboration solves the unavoidable problem of a single vehicle's limited visibility through information sharing. In this work, the paper proposes a new driving-oriented collaboration strategy that optimizes driving performance and communication efficiency simultaneously. The scheme includes: i) driving-intention-based perceptual communication, in which CoDriving exchanges spatially sparse but driving-critical BEV perceptual features through a driving-request module; and ii) BEV feature enhancement, in which CoDriving uses the received features to enhance each collaborating vehicle's BEV perceptual features. The enhanced BEV features help the system produce more accurate perception results and planning predictions.

Experimental results

Using the V2Xverse simulation platform, this article tests CoDriving on three tasks: closed-loop driving, 3D object detection, and waypoint prediction. In the key closed-loop driving test, compared with the previous single-vehicle end-to-end autonomous driving SOTA method, CoDriving's driving score improves significantly by 62.49% and the pedestrian collision rate drops by 53.50%. On the object detection and waypoint prediction tasks, CoDriving also outperforms other collaborative methods, as shown in Table 2.

Table 2. CoDriving outperforms the SOTA single-vehicle driving method in the closed-loop driving task, and outperforms other collaborative perception methods in the modular perception and planning subtasks

This article also verifies CoDriving's collaborative performance under different communication bandwidths. On all three tasks of closed-loop driving, 3D object detection, and waypoint prediction, CoDriving outperforms other collaborative methods under different communication bandwidth constraints, as shown in Figure 4.

Figure 4. Collaboration performance of CoDriving under different communication bandwidths

Figure 5 shows a driving case of CoDriving in the V2Xverse simulation environment. In this scene, a pedestrian in a blind spot suddenly rushes onto the road. The single autonomous vehicle, with its limited field of view, cannot avoid the pedestrian in advance, causing a serious accident. CoDriving, by contrast, uses the visual features shared by roadside units to detect the pedestrian early and avoid it safely.


Figure 5(1). Compared with single-vehicle autonomous driving with limited vision, CoDriving uses information provided by the roadside unit to detect the pedestrian in the blind spot.
Figure 5(2). CoDriving successfully avoids the pedestrian, while the single autonomous vehicle fails to react in time, causing a collision.

Summary

This work supports the development of collaborative autonomous driving methods by building the simulation platform V2Xverse and proposing a new end-to-end driving system. V2Xverse is a V2X collaborative driving simulation platform that supports closed-loop driving testing, providing a complete development pipeline for collaborative autonomous driving systems aimed at improving final driving performance. Notably, V2Xverse also supports deploying a variety of existing single-vehicle autonomous driving systems, as well as training and closed-loop driving testing for a variety of existing collaborative perception methods. Meanwhile, this paper proposes CoDriving, a new end-to-end collaborative autonomous driving system that improves driving performance and optimizes communication efficiency by sharing driving-critical perceptual information. A comprehensive evaluation of the full driving system shows that CoDriving significantly outperforms single-vehicle autonomous driving under different communication bandwidths. The researchers believe the V2Xverse platform and the CoDriving system offer a promising path toward more reliable autonomous driving.
