


MIT researchers use AI to help self-driving cars avoid idling at red lights
What if drivers could schedule their trips precisely so that they went straight through the traffic lights every time?
For human drivers, this happens only under particularly lucky circumstances, but autonomous vehicles that use AI to control their speed could achieve it far more reliably.
In a new study, scientists at the Massachusetts Institute of Technology (MIT) demonstrate a machine learning approach that can learn to control a fleet of autonomous vehicles, keeping traffic flowing as they approach and pass through a signalized intersection.
According to simulation results, their method can reduce fuel consumption and emissions while increasing average vehicle speed. The approach works best if all cars on the road are autonomous, but even if only 25% of cars use the control algorithm, it still delivers substantial fuel and emissions benefits.
"This is a very interesting place to intervene. No one's life is better because they are stuck at an intersection. Many other climate change interventions involve a quality-of-life difference, so there's a barrier to entry there," said Cathy Wu, senior author of the study. Wu is the Gilbert W. Winslow Career Development Assistant Professor in the Department of Civil and Environmental Engineering and a member of the Institute for Data, Systems, and Society (IDSS) and the Laboratory for Information and Decision Systems (LIDS).
Vindula Jayawardana, a graduate student in LIDS and the Department of Electrical Engineering and Computer Science, is the lead author of the study. The research will be presented at the European Control Conference.
Intricate Intersections
While humans may drive through a green light without thinking, an intersection scenario can take billions of different forms, depending on the number of lanes, the signal timing, the number and speed of vehicles, the presence of pedestrians and cyclists, and so on.
The typical approach to intersection control problems is to use a mathematical model to solve one simple, idealized intersection. This looks good on paper, but it is unlikely to hold up in the real world, where traffic patterns are often chaotic.
Wu and Jayawardana approached the problem from a different angle, using a model-free technique called deep reinforcement learning. Reinforcement learning is a trial-and-error method in which a control algorithm learns to make a series of decisions and is rewarded when it finds a good sequence. With deep reinforcement learning, the algorithm uses assumptions learned by a neural network to find shortcuts to good sequences, even when there are billions of possibilities.
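The trial-and-error loop described above can be illustrated with a toy example. This is not the authors' code: it is a generic tabular Q-learning sketch on a made-up corridor problem, where the agent is rewarded only when it reaches the goal and must discover a good action sequence by experimentation.

```python
import random

def q_learning_demo(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy corridor (illustrative only, not the paper's method).

    States 0..n_states-1; actions: 0 = stay, 1 = move right.
    Reward +1 is given only on reaching the goal state, so the agent
    must learn by trial and error that "move right" is the good sequence.
    """
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # Q-values: q[state][action]
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != goal:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2 = min(s + a, goal)
            r = 1.0 if s2 == goal else 0.0
            # Standard Q-learning update toward the reward-plus-discounted-future estimate
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    # Greedy policy after learning: the chosen action for each non-goal state
    return [max((0, 1), key=lambda a: q[s][a]) for s in range(goal)]
```

After training, the greedy policy moves right from every state, showing how repeated trial and error plus a reward signal converges on a good action sequence. Deep RL replaces the table `q` with a neural network so the same idea scales to enormous state spaces like real intersections.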
This is useful for a long-horizon problem like this one. Wu noted that the control algorithm must issue upwards of 500 acceleration instructions to a vehicle over an extended period. She added: "And we have to get the sequence right before we know we've mitigated emissions well and are at the intersection at a good pace." In other words, the researchers wanted the system to learn a strategy that reduces fuel consumption while limiting the impact on travel time. These goals can conflict with each other.
"To reduce travel time, we want the car to drive fast, but to reduce emissions, we want the car to slow down or not move at all. These competing rewards can be very confusing for a learning agent," Wu said.
While the generality of this problem makes it challenging, the researchers used a technique called reward shaping to work around it. With reward shaping, they give the system some domain knowledge it cannot learn on its own: in this case, they penalize the system every time a vehicle comes to a complete stop, so that it learns to avoid this behavior.
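A shaped reward of this kind might look like the following minimal sketch. The weights, the reference speed, and the squared-acceleration fuel proxy are all illustrative assumptions, not the paper's exact formulation; the point is the structure: a progress term and a fuel term that compete, plus a fixed shaping penalty for coming to a complete stop.

```python
def shaped_reward(speed, accel, is_stopped,
                  v_ref=15.0, w_fuel=0.05, stop_penalty=1.0):
    """Illustrative shaped reward (hypothetical weights, not the paper's exact terms).

    speed, accel: current speed (m/s) and acceleration (m/s^2)
    - progress term rewards moving, saturating at a reference speed v_ref
    - fuel term penalizes a crude consumption proxy (squared acceleration)
    - reward shaping: a fixed penalty whenever the vehicle fully stops,
      injecting the domain knowledge that idling at the light is wasteful
    """
    progress = min(speed, v_ref) / v_ref      # in [0, 1]: faster is better
    fuel_cost = w_fuel * accel ** 2           # competing objective: smooth driving
    shaping = stop_penalty if is_stopped else 0.0
    return progress - fuel_cost - shaping
```

With these numbers, a vehicle cruising smoothly scores far higher than one idling at the light, which is exactly the behavior the shaping term is meant to discourage.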
Traffic Testing
Once the researchers had developed an effective control algorithm, they evaluated it using a traffic simulation platform with a single intersection. They applied the algorithm to a fleet of networked autonomous vehicles that communicate with the upcoming traffic light to receive signal phase and timing information while also observing their surroundings. The algorithm tells each vehicle when to accelerate and decelerate.
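For intuition, a vehicle that knows the light's phase and timing can pick a cruise speed that reaches the stop line just as the light turns green. The helper below is a hand-crafted heuristic invented for illustration; the paper learns this kind of behavior with deep RL rather than computing it in closed form.

```python
def glide_speed(dist_m, time_to_green_s, v_max=15.0, v_min=2.0):
    """Pick a cruise speed that arrives at the stop line as the light turns green.

    Hypothetical heuristic for intuition only -- not the learned policy.
    dist_m: distance to the stop line (m)
    time_to_green_s: seconds until the light turns green (0 if already green)
    v_max, v_min: speed limits; v_min > 0 so the vehicle glides but never stops
    """
    if time_to_green_s <= 0:          # light is green: proceed at full speed
        return v_max
    v = dist_m / time_to_green_s      # speed that arrives exactly at the green
    return max(v_min, min(v, v_max))  # clamp: glide, but never come to a full stop
```

For example, a car 100 m out with 10 s to green would glide at 10 m/s instead of racing up and idling, which is the stop-and-go pattern the control system is designed to eliminate.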
Their system produced no stop-and-go traffic as vehicles approached the intersection. In simulation, more cars passed through during a single green phase than in a model simulating human drivers. Compared with other optimization methods also aimed at avoiding stop-and-go traffic, their technique achieved greater reductions in fuel consumption and emissions. If every car on the road were autonomous, their control system could cut fuel consumption by 18% and CO2 emissions by 25%, while increasing travel speed by 20%.
Wu said: "It's really incredible to have a 20% to 25% reduction in fuel or emissions from one intervention. But what I find interesting, and what I really wanted to see, is this nonlinear scaling. If we only control 25% of vehicles, that gives us 50% of the fuel and emissions benefits. That means we don't have to wait until we reach 100% autonomous vehicles to benefit from this approach."
Next, the researchers hope to study interaction effects between multiple intersections. They also plan to explore how different intersection settings, such as the number of lanes and the signal timing, affect travel time, emissions, and fuel consumption, and to study how their control system affects safety when autonomous vehicles share the road with human drivers.
While this work is still in its early stages, Wu believes this approach could be more feasible to implement in the near future.