
Cold thoughts under the ChatGPT craze: AI energy consumption in 2025 may exceed that of humans, and AI computing needs to improve quality and efficiency

PHPz
Release: 2023-04-12 09:43:02

After years of development, OpenAI's generative AI systems DALL-E and GPT-3 have become popular worldwide and are now showcasing their remarkable application potential. This explosion of generative AI comes with a problem, however: every time DALL-E creates an image or GPT-3 predicts the next word, the system performs multiple inference computations, tying up substantial resources and consuming large amounts of electricity. Current GPU and CPU architectures cannot operate efficiently enough to meet the imminent computing demand, creating enormous challenges for hyperscale data center operators.


Research institutions predict that data centers are becoming some of the world's largest energy consumers: they accounted for 3% of total electricity consumption in 2017, a share projected to rise to 4.5% by 2025. Taking China as an example, the electricity consumed by data centers operating nationwide is expected to exceed 400 billion kWh in 2030, or 4% of the country's total electricity consumption.

Cloud computing providers also recognize that their data centers consume large amounts of electricity and have taken steps to improve efficiency, such as building and operating data centers in the Arctic to take advantage of renewable energy and natural cooling. However, this is not enough to keep pace with the explosive growth of AI applications.

Research at Lawrence Berkeley National Laboratory in the United States found that efficiency improvements have kept data center energy consumption in check over the past 20 years, but the same research shows that current energy efficiency measures may not be enough to meet the needs of future data centers, so a better approach is needed.

Data transmission is a fatal bottleneck

The root of the efficiency problem lies in the way GPUs and CPUs work, especially when running AI inference and training workloads. Many people understand the push "beyond Moore's Law" and the physical limits of packing more transistors onto ever-larger dies. More advanced chips are helping to address these challenges, but current solutions have a critical weakness for AI inference: the sharply limited speed at which data can be moved to and from random-access memory.

Traditionally, it has been cheaper to manufacture processors and memory as separate chips, and for years processor clock speed was the key limiting factor in computer performance. Today, what holds back progress is the interconnect between chips.

Jeff Shainline, a researcher at the National Institute of Standards and Technology (NIST), explained: "When memory and processor are separated, the communication link connecting the two domains becomes the main bottleneck of the system." Professor Jack Dongarra, a researcher at Oak Ridge National Laboratory in the United States, said succinctly: "When we look at the performance of today's computers, we find that data transmission is the fatal bottleneck."
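A rough way to see this bottleneck on an ordinary machine is to compare a compute-bound operation with a memory-bound one over the same data. The sketch below is a minimal illustration in Python with NumPy, not a rigorous benchmark; the matrix size and the exact throughput numbers will vary by hardware:

```python
import time
import numpy as np

N = 2048
a = np.random.rand(N, N).astype(np.float32)
b = np.random.rand(N, N).astype(np.float32)

# Compute-bound: ~2*N^3 floating-point operations; operands are
# reused heavily, so most accesses hit cache rather than DRAM.
t0 = time.perf_counter()
c = a @ b
t_matmul = time.perf_counter() - t0

# Memory-bound: only N^2 additions, but every byte must cross the
# memory bus, so bandwidth (not arithmetic) is the limit.
t0 = time.perf_counter()
d = a + b
t_add = time.perf_counter() - t0

print(f"matmul: {2 * N**3 / t_matmul / 1e9:.1f} GFLOP/s")
print(f"add:    {N**2 / t_add / 1e9:.2f} GFLOP/s")
```

On typical hardware the elementwise add achieves orders of magnitude fewer FLOP/s than the matrix multiply, even though it does far less arithmetic: the data transfer, not the computation, is the bottleneck.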

AI inference vs. AI training

AI systems use different types of computation when training a model than when using the trained model to make predictions. AI training loads tens of thousands of image or text samples into a Transformer-based model as reference data and then begins processing. The thousands of cores in a GPU process large, rich data sets such as images or videos very efficiently, and if results are needed faster, more cloud-based GPUs can be rented.
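For a concrete picture of the training side, here is a minimal training-loop sketch in PyTorch; the model size, synthetic data, and hyperparameters are illustrative placeholders, not the configuration of any system mentioned in this article:

```python
import torch
import torch.nn as nn

class TinyTransformer(nn.Module):
    """A toy Transformer encoder with a classification head."""
    def __init__(self, d_model=128, nhead=4, num_layers=2, num_classes=10):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                # x: (batch, seq_len, d_model)
        h = self.encoder(x)
        return self.head(h.mean(dim=1))  # pool over the sequence

model = TinyTransformer()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(10):                   # real training runs for many epochs
    x = torch.randn(32, 64, 128)         # stand-in for a batch of samples
    y = torch.randint(0, 10, (32,))
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()                      # the backward pass dominates training cost
    opt.step()
```

Each step runs both a forward and a backward pass over the whole batch, which is why training is so GPU-hungry; inference, by contrast, runs only the forward pass, but does so billions of times.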


Although a single AI inference requires far less energy than training, when hundreds of millions of users rely on auto-completion, an enormous number of calculations and predictions are needed to decide which word comes next, and in aggregate this consumes more energy than the long training run itself.

For example, Facebook's data centers handle trillions of inferences every day, a number that has more than doubled in the past three years. Research has found that running language-translation inference on a large language model (LLM) consumes two to three times more energy than the initial training.
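A back-of-envelope calculation shows how quickly aggregate inference can overtake a one-off training run. All of the per-query and training energy figures below are hypothetical assumptions chosen for illustration, not measurements from Facebook or from any particular LLM:

```python
# Hypothetical figures, for illustration only.
energy_per_inference_j = 0.5   # assumed joules per prediction
inferences_per_day = 1e12      # "trillions of inferences per day" (from the text)
training_energy_j = 1e12       # assumed one-off training cost (~278 MWh)

daily_inference_j = energy_per_inference_j * inferences_per_day

MWH = 3.6e9  # joules per megawatt-hour
print(f"daily inference energy:  {daily_inference_j / MWH:.0f} MWh")
print(f"one-off training energy: {training_energy_j / MWH:.0f} MWh")
print(f"inference overtakes training after "
      f"{training_energy_j / daily_inference_j:.1f} days")
```

With these assumed numbers, the fleet's inference workload consumes as much energy as the entire training run in just two days; the exact crossover point depends entirely on the figures you plug in.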

Surge in demand tests computing efficiency

ChatGPT became popular around the world at the end of last year, and GPT-4 is even more impressive. If more energy-efficient methods can be adopted, AI inference can extend to a wider range of devices and create new ways of computing.

For example, Microsoft's Hybrid Loop is designed to build AI experiences that dynamically span cloud computing and edge devices. It lets developers defer, until runtime, the decision of whether inference should run on the Azure cloud platform, a local client computer, or a mobile device, binding that decision as late as possible to maximize efficiency. Facebook, likewise, introduced AutoScale to help decide efficiently at runtime where to compute inference.
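Neither Hybrid Loop nor AutoScale is described here at the API level, but the underlying idea, choosing an inference target at runtime, can be sketched in a few lines. The device classes, latency budget, and energy estimates below are entirely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    latency_ms: float   # expected end-to-end latency on this target
    energy_j: float     # estimated energy per inference on this target
    available: bool

def choose_target(targets, max_latency_ms=200.0):
    """Pick the lowest-energy target that still meets the latency budget."""
    feasible = [t for t in targets if t.available and t.latency_ms <= max_latency_ms]
    if not feasible:
        raise RuntimeError("no inference target meets the latency budget")
    return min(feasible, key=lambda t: t.energy_j)

targets = [
    Target("on-device NPU", latency_ms=40.0,  energy_j=0.05, available=True),
    Target("local PC GPU",  latency_ms=25.0,  energy_j=0.50, available=True),
    Target("cloud GPU",     latency_ms=120.0, energy_j=2.00, available=True),
]
print(choose_target(targets).name)  # -> "on-device NPU"
```

A real system would estimate latency and energy from live telemetry rather than constants, but the late-binding structure is the same: the decision is made per request, not at build time.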

To improve efficiency, the obstacles that hold AI back must be overcome and more effective methods found.

Sampling and pipelining can speed up deep learning by reducing the amount of data processed. SALIENT (for Sampling, Slicing, and Data Movement) is a new approach developed by researchers at MIT and IBM to address these critical bottlenecks. It can significantly reduce the cost of running neural networks on large graph datasets containing 100 million nodes and 1 billion edges. But sampling also affects accuracy and precision: an acceptable trade-off when selecting the next social post to display, but not when trying to identify unsafe conditions on a worksite in near real time.
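The core idea behind sampling approaches of this kind is to process only a random subset of each node's neighborhood instead of expanding the full multi-hop neighborhood, which on web-scale graphs can reach millions of nodes. The sketch below is a generic neighbor-sampling routine in plain Python, not the SALIENT implementation itself:

```python
import random

def sample_neighborhood(adj, seed_nodes, fanout=10, hops=2):
    """Expand at most `fanout` randomly chosen neighbors per node per hop,
    bounding the amount of data a training step has to touch."""
    frontier = set(seed_nodes)
    visited = set(seed_nodes)
    for _ in range(hops):
        nxt = set()
        for node in frontier:
            neighbors = adj.get(node, [])
            nxt.update(random.sample(neighbors, min(fanout, len(neighbors))))
        frontier = nxt - visited
        visited |= frontier
    return visited

# Toy graph: node -> list of neighbor node ids.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(sample_neighborhood(adj, seed_nodes=[0], fanout=2))
```

Because only the sampled subgraph is moved to the accelerator, both data transfer and computation shrink, and that is exactly where the accuracy trade-off mentioned above comes from.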

Tech companies such as Apple, Nvidia, Intel, and AMD have announced the integration of dedicated AI engines into their processors, and AWS is even developing a new Inferentia 2 processor. But these solutions still use the traditional von Neumann processor architecture, with integrated SRAM and external DRAM, all of which require extra power to move data in and out of memory.

In-memory computing may be the solution

Beyond these efforts, researchers have discovered another way to break through the "memory wall": bring the computation closer to the memory.

The memory wall refers to the physical barrier that limits how fast data can move in and out of memory, a fundamental limitation of traditional architectures. In-memory computing (IMC) addresses this challenge by running AI matrix calculations directly in the memory module, avoiding the overhead of sending data across the memory bus.

IMC is well suited to AI inference because inference involves a relatively static but large set of weights that is accessed repeatedly. While some data always has to move in and out, IMC eliminates much of the energy cost and latency of data movement by keeping the weights in the same physical unit, where they can be efficiently used and reused across many calculations.
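Conceptually, an IMC array writes the weight matrix into the memory tiles once and then streams only small activation vectors in and out. The sketch below models that dataflow in NumPy; it is a functional illustration of the concept only, since real IMC hardware performs the multiply-accumulate in analog or digital circuits inside the memory array itself:

```python
import numpy as np

class IMCTile:
    """Models one in-memory-compute tile: weights are written once and
    stay put, so the large weight matrix never crosses the memory bus."""
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=np.float32)  # stationary

    def matvec(self, activations):
        # In real IMC hardware this product happens inside the memory array.
        return self.weights @ activations

# Write the large, static weights once...
tile = IMCTile(np.random.rand(256, 128))

# ...then reuse them for many inferences; only small vectors move.
for _ in range(1000):
    x = np.random.rand(128).astype(np.float32)
    y = tile.matvec(x)
```

The contrast with the von Neumann pattern above is that per inference only the 128-element input and 256-element output travel anywhere; the 256x128 weight matrix is loaded exactly once.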

This approach also improves scalability because it maps well onto chip design. With such chips, AI inference can be tested on a developer's computer and then deployed through data centers to production. Data centers can then run enterprise-scale AI models efficiently on large fleets of machines built around many such processors.

Over time, IMC is expected to become the dominant architecture for AI inference use cases. This makes sense when users are dealing with massive data sets and trillions of calculations: no resources are wasted shuttling data across the memory wall, and the approach scales easily to meet long-term needs.

Summary:

The AI industry is at an exciting turning point. Technological advances in generative AI, image recognition, and data analytics are revealing unique connections and uses for machine learning, but a technology solution capable of meeting that demand has to be built first. According to Gartner's prediction, unless more sustainable options become available, AI will consume more energy than human activities by 2025. A better way must be found before that happens.


Source: 51cto.com