On March 1, 2024, the 2024 International Autonomous Driving Challenge was officially launched. The challenge is hosted by the Shanghai Artificial Intelligence Laboratory and co-organized by a number of domestic and international institutions, with a guidance and awards committee made up of well-known experts and scholars from around the world. The competition aims to explore in depth the tasks and challenges faced by autonomous systems and to provide a stage for participants worldwide to showcase their technology and innovation.

Few restrictions are placed on participants: multiple teams from the same organization may compete, all public datasets and pre-trained weights may be used, and a single team may win multiple awards. The competition has seven tracks; winners can receive prizes of up to US$27,000 and may also be invited to submit articles to top international journals. Detailed guidelines and baseline models have been prepared for each track and can be accessed via the corresponding link for each track.
Official website of the competition:
https://opendrivelab.com/challenge2024
Main event:
CVPR 2024 Workshop on Foundation Models for Autonomous Systems (Seattle, USA)
Introduction to the competition tracks
End-to-end autonomous driving
Because earlier datasets were limited in size and open-loop and closed-loop metrics are inconsistent, it has been difficult to benchmark sensorimotor driving policies on real data. This track uses large-scale data to bridge the gap between the two evaluation paradigms: by abstracting the scene in BEV over a short time horizon, it enables efficient open-loop evaluation that aligns more closely with closed-loop evaluation.
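As a rough illustration of what an open-loop planning metric can look like, the sketch below computes the average and final L2 displacement error between a predicted ego trajectory and the ground-truth future; the official metric and data format are defined by the track's guidelines, so the shapes and sampling rate here are assumptions.

```python
import numpy as np

def displacement_errors(pred_traj: np.ndarray, gt_traj: np.ndarray):
    """Open-loop errors between predicted and ground-truth ego trajectories.

    Both arrays have shape (T, 2): T future timesteps of BEV (x, y) waypoints.
    Returns the average displacement error (ADE) and final displacement error (FDE).
    """
    assert pred_traj.shape == gt_traj.shape
    # Per-timestep Euclidean (L2) distance between predicted and true waypoints.
    l2 = np.linalg.norm(pred_traj - gt_traj, axis=-1)
    return l2.mean(), l2[-1]

# Hypothetical 3-second horizon sampled at 2 Hz (6 waypoints).
pred = np.array([[0.9, 0.1], [1.8, 0.3], [2.9, 0.4], [4.1, 0.6], [5.0, 0.9], [6.2, 1.1]])
gt   = np.array([[1.0, 0.0], [2.0, 0.2], [3.0, 0.5], [4.0, 0.7], [5.1, 1.0], [6.0, 1.2]])
ade, fde = displacement_errors(pred, gt)
print(f"ADE = {ade:.3f} m, FDE = {fde:.3f} m")
```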
World Model
As an abstract spatiotemporal representation of reality, a world model can predict future states from the current observed state, and learning a world model can lift the performance of foundation models to a new level. In this track, the model must predict point clouds at future moments from visual input alone to demonstrate its ability to forecast the world.
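A common way to compare a predicted future point cloud against the ground-truth scan is the Chamfer distance; the minimal NumPy sketch below is only an illustration of the idea, since the official evaluation protocol and data format are defined by the track.

```python
import numpy as np

def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds of shape (N, 3) and (M, 3).

    For every predicted point, take the squared distance to its nearest
    ground-truth point, and vice versa, then sum the two directional averages.
    Brute force O(N*M); fine for small illustrative clouds.
    """
    # Pairwise squared distances, shape (N, M).
    d2 = ((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Tiny hypothetical clouds: a predicted scan and a slightly perturbed ground truth.
pred = np.random.rand(128, 3) * 50.0
gt = pred + np.random.normal(scale=0.1, size=pred.shape)
print(f"Chamfer distance: {chamfer_distance(pred, gt):.4f} m^2")
```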
Occupancy grid and motion estimation
Three-dimensional bounding boxes are often insufficient to describe general objects. Inspired by concepts from robotics, perception can instead be represented as the occupancy of a voxelized three-dimensional space. In this track, contestants must not only output this gridded representation of 3D space but also predict the motion of each voxel.
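To make the expected output concrete, the sketch below shows one plausible way to hold a semantic occupancy grid together with per-voxel motion vectors; the grid resolution, class list, and field names are assumptions, as the actual format is specified by the track's toolkit.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class OccupancyFlowPrediction:
    """Illustrative container for one frame of occupancy + flow output.

    semantics: (X, Y, Z) integer voxel labels (e.g. 0 = free, 1 = vehicle, ...).
    flow:      (X, Y, Z, 2) per-voxel BEV motion vectors in metres per second.
    """
    semantics: np.ndarray
    flow: np.ndarray

    def occupied_ratio(self) -> float:
        """Fraction of voxels predicted as non-free."""
        return float((self.semantics != 0).mean())

# Hypothetical 200 x 200 x 16 grid, initialized as entirely free and static.
grid = OccupancyFlowPrediction(
    semantics=np.zeros((200, 200, 16), dtype=np.uint8),
    flow=np.zeros((200, 200, 16, 2), dtype=np.float32),
)
print(f"Occupied ratio: {grid.occupied_ratio():.3f}")
```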
Embodied multi-modal three-dimensional visual grounding
Compared with driving scenes, indoor embodied three-dimensional perception systems face multi-modal inputs that include language instructions, more complex semantic understanding, more diverse object categories and orientations, and a very different perception space and set of requirements. To this end, the competition has built EmbodiedScan, an egocentric, multi-modal, whole-scene three-dimensional perception toolkit. Given a verbal description of a specific object, the task is to detect the object's category and its oriented three-dimensional box.
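For intuition, an oriented three-dimensional box in such a task can be parameterized by a center, a size, and an orientation; the sketch below uses a 9-DoF center/size/Euler-angle parameterization as an assumption, since the exact convention is defined by the EmbodiedScan toolkit.

```python
from dataclasses import dataclass

@dataclass
class OrientedBox3D:
    """Illustrative 9-DoF oriented 3D bounding box: center, size, Euler angles."""
    cx: float
    cy: float
    cz: float          # box center in metres
    dx: float
    dy: float
    dz: float          # box dimensions (length, width, height) in metres
    roll: float
    pitch: float
    yaw: float         # orientation in radians

    def volume(self) -> float:
        return self.dx * self.dy * self.dz

@dataclass
class GroundingPrediction:
    """One visual-grounding prediction: the referred object's class and box."""
    description: str   # the language query
    category: str      # predicted object category
    box: OrientedBox3D
    score: float       # confidence in [0, 1]

pred = GroundingPrediction(
    description="the white mug on the kitchen table",
    category="mug",
    box=OrientedBox3D(1.2, 0.4, 0.9, 0.10, 0.10, 0.12, 0.0, 0.0, 0.3),
    score=0.87,
)
print(pred.category, f"box volume = {pred.box.volume():.4f} m^3")
```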
CARLA Autonomous Driving Challenge
The CARLA Autonomous Driving Challenge requires vehicles to drive through a set of predefined routes. The routes cover complex settings such as highways, urban areas, residential districts, and rural environments, under lighting and weather conditions including daylight, sunset, night, rain, and fog, enabling closed-loop evaluation of autonomous driving systems.
Application of large language models in autonomous driving
By introducing language information, the DriveLM dataset connects large language models with autonomous driving systems, bringing language-based reasoning into decision making so that planning remains interpretable. Taking multi-view images as input, the model must answer a variety of driving-related questions.
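The sketch below shows one way such a multi-view driving question-answering sample might be organized and scored with a trivial exact-match check; the field names are hypothetical, and the real benchmark uses its own schema and richer language metrics.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DrivingQASample:
    """Illustrative multi-view driving VQA sample (not the official DriveLM schema)."""
    image_paths: List[str]   # paths to the surround-view camera images
    question: str
    answer: str              # reference answer

def exact_match(prediction: str, reference: str) -> bool:
    """Trivial accuracy check; real benchmarks use richer language metrics."""
    return prediction.strip().lower() == reference.strip().lower()

sample = DrivingQASample(
    image_paths=[f"frame_0001/cam_{i}.jpg" for i in range(6)],
    question="Is there a pedestrian crossing in front of the ego vehicle?",
    answer="Yes, a pedestrian is crossing from the left.",
)
print(exact_match("yes, a pedestrian is crossing from the left.", sample.answer))  # True
```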
Driving without a map
In the absence of high-definition maps, self-driving cars require a high level of scene understanding, and this track is designed to explore the limits of scene reasoning capabilities. Taking multi-view images and standard-definition maps as input, the neural network must not only output perception results for lanes and traffic elements but also the topological relationships among lanes and between lanes and traffic elements.
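One simple way to think about the required topology output is as two adjacency matrices: lane-to-lane connectivity and lane-to-traffic-element association. The sketch below is illustrative only; the official output format is specified by the track's devkit.

```python
import numpy as np

# Hypothetical scene: 4 lane centerlines and 2 traffic elements (e.g. a traffic
# light and a road sign). Shapes and indexing here are illustrative only.
num_lanes, num_elements = 4, 2

# lane_lane[i, j] = 1 if lane j is a successor of lane i.
lane_lane = np.zeros((num_lanes, num_lanes), dtype=np.int8)
lane_lane[0, 1] = 1   # lane 0 connects into lane 1
lane_lane[1, 2] = 1   # lane 1 connects into lane 2

# lane_element[i, k] = 1 if traffic element k governs lane i.
lane_element = np.zeros((num_lanes, num_elements), dtype=np.int8)
lane_element[2, 0] = 1  # traffic element 0 controls lane 2

# A downstream planner could query, e.g., all lanes governed by element 0:
controlled = np.nonzero(lane_element[:, 0])[0]
print(f"Lanes governed by traffic element 0: {controlled.tolist()}")
```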
Schedule
All times below are Beijing time; please refer to the official competition website for the detailed schedule.
Guidance and Awards Committee
Listed in order of surname stroke count; the list is being updated continuously.
Qiao Yu | Shanghai Artificial Intelligence Laboratory | Leading Scientist, Assistant to the Director
Liu Qingshan | Nanjing University of Posts and Telecommunications | Vice President
Yang Xiaokang | Shanghai Jiao Tong University | Executive Dean of the Institute of Artificial Intelligence
Li Shengbo | Tsinghua University | Party Secretary of the School of Vehicle and Mobility, national high-level leading talent, Professor
Zhang Yaqin | Tsinghua University | Foreign Member of the Chinese Academy of Engineering, Dean of the Institute for AI Industry Research, Chair Professor
Chen Baoquan | Peking University | Deputy Dean of the School of Intelligence Science and Technology, Boya Distinguished Professor
Xia Huaxia | Meituan | Chief Scientist, Vice President
Gao Xinbo | Chongqing University of Posts and Telecommunications | Deputy Party Secretary, President, Professor
Xue Jianru | Xi'an Jiaotong University | Professor
Official website: https://opendrivelab.com/challenge2024
WeChat group: About us -> Join the community
Registration link
Contact email: workshop-e2e-ad@googlegroups.com