Tesla uses artificial intelligence to improve autonomous driving
Tesla said at an investor conference last week that its self-driving capabilities will be significantly improved once the Dojo supercomputer joins its high-performance computing infrastructure.
Ashok Elluswamy, director of Tesla Autopilot software, said in a speech at the Investor Day event that Tesla vehicles running FSD (Full Self-Driving) software, currently owned by around 400,000 customers, will be able to make smarter self-driving decisions as hardware upgrades improve the company's overall artificial intelligence (AI) capabilities.
The company's current AI system collects visual data in real time from eight cameras on each vehicle and generates a 3D output that identifies obstacles and their motion, lanes, roads and traffic lights, supporting the modeling tasks that help the car make decisions.
Tesla mines its fleet of cars for more visual data and feeds it into training models. As the models continuously learn to solve new problems, the AI gets better at recognizing patterns on the road, and the new knowledge is pushed back to the cars through FSD software updates.
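The collect-label-train-deploy loop described above can be sketched as a simple "data engine" cycle. This is an illustrative toy, not Tesla's actual pipeline; every function name here is hypothetical.

```python
# Hypothetical sketch of a fleet "data engine" loop: collect clips,
# auto-label them, retrain, and ship new weights back over the air.
# All names are illustrative, not Tesla's actual APIs.

def collect_fleet_clips(fleet):
    """Gather video clips flagged by cars in the fleet."""
    return [clip for car in fleet for clip in car["clips"]]

def auto_label(clips):
    """Stand-in for an automated labeling pipeline."""
    return [{"clip": c, "label": f"label_for_{c}"} for c in clips]

def train(model, labeled):
    """Stand-in for a training step: the model accumulates examples."""
    model["examples"] += len(labeled)
    model["version"] += 1
    return model

def ota_deploy(model, fleet):
    """Push the new model version to every car via an OTA update."""
    for car in fleet:
        car["model_version"] = model["version"]

model = {"version": 0, "examples": 0}
fleet = [{"clips": ["c1", "c2"], "model_version": 0},
         {"clips": ["c3"], "model_version": 0}]

# "If we run and repeat this process, it gets better and better."
for _ in range(3):
    labeled = auto_label(collect_fleet_clips(fleet))
    model = train(model, labeled)
    ota_deploy(model, fleet)

print(model["version"], model["examples"])  # 3 iterations, 3 clips each
```

The point of the sketch is the closed loop: each pass through the fleet grows the training set and bumps the deployed model version.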
“If we run and repeat this process, it gets better and better,” Elluswamy said. “The solution to scalable FSD is to get the architecture, data and computation just right. We have assembled a world-class team to execute this work, and they are bringing these three efforts to the forefront.”
It has not all been plain sailing for FSD: a software glitch forced Tesla to recall more than 360,000 vehicles. The company provided a fix via an over-the-air software update. Tesla customers can subscribe to FSD starting at $99 per month, and some customers with older Tesla models also need to pay extra to install the FSD computer. Elluswamy claims that Teslas running FSD are still five to six times safer than the U.S. national average.
Elluswamy said: “As we improve the safety, reliability and comfort of our systems, they can unlock driverless operation, enabling new ways of using cars beyond how they are used today.”
Today, the company runs its AI systems on 14,000 GPUs in its data centers and can take advantage of 30 petabytes of video cache, which is growing to 200 petabytes. About 4,000 GPUs are used for automatic labeling, and the remaining 10,000 GPUs are used for artificial intelligence data training.
“Once we bring Dojo (our training computer) into this space, all of that will increase significantly,” Elluswamy said.
The Dojo system is based on Tesla’s self-developed D1 chip, which delivers 22.6 teraflops of FP32 performance. It has 50 billion transistors, 10TBps of on-chip bandwidth and 4TBps of off-chip bandwidth.
A set of D1 chips is housed in a high-density ExaPOD cabinet, which will deliver 1.1 exaflops of BF16 and CFP8 performance. Tesla’s on-board FSD computer delivers around 150 teraflops and is mainly used for inference.
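The ExaPOD figure can be sanity-checked against publicly reported D1 numbers from Tesla's AI Day and Hot Chips talks (these per-chip BF16 and chip-count figures are not stated in this article and are assumptions here): roughly 362 teraflops of BF16/CFP8 per D1 chip and roughly 3,000 D1 chips per ExaPOD.

```python
# Sanity-check of the ~1.1 exaflops ExaPOD claim using publicly
# reported D1 figures (assumed, not stated in this article):
# ~362 TFLOPS BF16/CFP8 per chip, ~3,000 chips per ExaPOD.
D1_BF16_TFLOPS = 362
CHIPS_PER_EXAPOD = 3000

exapod_eflops = D1_BF16_TFLOPS * CHIPS_PER_EXAPOD / 1_000_000
print(round(exapod_eflops, 2))  # ~1.09, consistent with "1.1 exaflops"
```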
Ganesh Venkataraman, senior director of hardware at Tesla, said in a talk at last year’s Hot Chips conference that Tesla built the D1 chip because of scaling deficiencies in existing GPUs and CPUs.
Venkataraman said: “We noticed a lot of bottlenecks, first on the inference side, which is why we built the FSD computer. Then we started noticing similar scaling issues in training. After understanding and measuring the workloads... we could optimize our systems based on what they need.”
In the early days, Tesla's AI system relied on a single camera and single frames of video, which were then stitched together in post-processing for the autonomous car's planning system.
"It was very fragile and didn't lead to significant success," Elluswamy said.
Over the past few years, Tesla has moved to a “multi-camera video world.” Each vehicle has eight cameras that feed visual information into the AI system, which generates a 3D output space in which the AI makes decisions about the presence of obstacles, their motion, lanes, roads, traffic lights and so on. The task modeling goes beyond computer vision, using techniques found in AI systems such as ChatGPT, including transformers, attention modules and autoregressive token modeling.
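At the heart of the attention modules mentioned above is scaled dot-product attention: each query scores every input feature, and the softmaxed scores weight a sum of those features. The minimal pure-Python sketch below fuses toy per-camera feature vectors with a single query; the shapes, values and "camera feature" framing are illustrative, not Tesla's actual architecture.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector:
    softmax(q . k / sqrt(d)) weighting a sum over the values."""
    d = len(query)
    scores = [dot(query, k) / math.sqrt(d) for k in keys]
    weights = softmax(scores)
    fused = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return fused, weights

# Toy "camera features": one short vector per camera (8 cameras, dim 4).
camera_feats = [[float(c + i) for i in range(4)] for c in range(8)]
query = [1.0, 0.0, 0.0, 0.0]  # an illustrative learned 3D-space query

fused, weights = attention(query, camera_feats, camera_feats)
print(len(fused))              # 4: one fused feature vector
print(round(sum(weights), 6))  # 1.0: attention weights sum to one
```

A real multi-camera model applies this with many queries, learned projections and multiple heads, but the weighting-and-summing core is the same.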
Elluswamy said: “With such an end-to-end solution for the perception system, we really remove the fragile post-processing steps and provide high-quality output to the planning system. And even the planning system is not static; it is now starting to use more and more AI systems to solve this problem.”
Autonomous vehicles need to respond quickly, making smooth, safe decisions in real time. Elluswamy gave the example of a 50-millisecond response time within which a self-driving car can make driving decisions after interacting with its surroundings, including pedestrians and traffic lights.
That is a lot of data, and with traditional computing, “each piece of data requires 10 milliseconds of computation, which can easily exceed 1,000 milliseconds. That is unacceptable. But with AI, we pack all of this into 50 milliseconds of computation so it can run in real time,” Elluswamy said.
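The latency budget in that quote is simple arithmetic: processing inputs one at a time at 10 ms apiece blows past the 50 ms deadline, while a single batched pass fits inside it. The input count below (150) is illustrative; the quote only says the sequential total "can easily exceed 1,000 milliseconds."

```python
# Back-of-envelope latency budget from the quote above.
PER_ITEM_MS = 10       # sequential cost per piece of data (from quote)
BATCHED_PASS_MS = 50   # one AI pass over everything (from quote)
DEADLINE_MS = 50       # real-time response target (from quote)

n_inputs = 150         # hypothetical number of per-frame data pieces
sequential_ms = n_inputs * PER_ITEM_MS

print(sequential_ms)                   # 1500 ms: far past the deadline
print(sequential_ms > 1000)            # True: "easily exceeds 1,000 ms"
print(BATCHED_PASS_MS <= DEADLINE_MS)  # True: the batched pass fits
```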
Tesla is augmenting its raw data by collecting vehicle data on different road conditions and traffic trends around the world. Tesla uses algorithms to reconstruct lanes, road boundaries, curbs, crosswalks and other images, which are then used as a basis to help the car navigate.
“This is achieved by collecting various clips from different cars in the fleet and combining all the clips into a unified representation of the world around the car,” Elluswamy said.
As more data enters the system, the training models are continuously rebuilt. To train the networks, Tesla built a sophisticated automated labeling pipeline that runs computational algorithms over the collected data and generates the labels used to train them.
“Once we have finished reconstructing the foundation, we can build various simulations on top of it to generate an infinite variety of data for training,” Elluswamy said. Tesla has powerful simulators that can synthesize adversarial weather, lighting conditions and even the motion of other objects. “Every time we add data, performance improves.”