
Sober thoughts amid the ChatGPT craze: AI energy consumption may exceed that of human activity by 2025, and AI computing needs to improve efficiency

Apr 12, 2023, 09:43 AM
ai chatgpt

After years of development, OpenAI's generative AI systems DALL-E and GPT-3 have become popular worldwide and are showcasing remarkable application potential. But this explosion of generative AI comes with a problem: every time DALL-E creates an image or GPT-3 predicts the next word, multiple inference computations are performed, consuming substantial resources and electricity. Current GPU and CPU architectures cannot run efficiently enough to meet the imminent computing demand, posing a huge challenge for hyperscale data center operators.


Research institutions predict that data centers will rank among the world's largest energy consumers, with their share of total electricity consumption rising from 3% in 2017 to 4.5% in 2025. In China, for example, the electricity consumed by data centers nationwide is expected to exceed 400 billion kWh by 2030, or 4% of the country's total electricity consumption.
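A quick back-of-envelope check makes the China projection concrete: if data centers draw 400 billion kWh and that is 4% of national consumption, the implied national total follows directly. The figures below simply restate the article's projections.

```python
# Sanity-check the projected figures: 400 billion kWh at a 4% share
# implies a national total of 10 trillion kWh in 2030.
dc_consumption_kwh = 400e9   # projected data-center usage, 2030
dc_share = 0.04              # projected share of national electricity
national_total_kwh = dc_consumption_kwh / dc_share
print(f"Implied national total: {national_total_kwh / 1e12:.0f} trillion kWh")
```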

Cloud computing providers also recognize that their data centers use large amounts of electricity and have taken steps to improve efficiency, such as building and operating data centers in the Arctic to take advantage of renewable energy and natural cooling. However, this is not enough to keep pace with the explosive growth of AI applications.

Research at Lawrence Berkeley National Laboratory in the United States found that improvements in data center efficiency have kept energy consumption growth in check over the past 20 years, but the same research shows that current energy-efficiency measures may not be enough to meet the needs of future data centers, so a better approach is required.

Data transmission is a fatal bottleneck

The root of the efficiency problem lies in the way GPUs and CPUs work, especially when running AI inference and training models. Many people are familiar with the limits "beyond Moore's Law" and the physical constraints on packing more transistors into larger chips. More advanced chips are helping with these challenges, but current solutions have a critical weakness for AI inference: the slow speed at which data can be moved to and from random-access memory.

Traditionally, it has been cheaper to manufacture processors and memory as separate chips, and for years processor clock speed was the key limiting factor in computer performance. Today, what holds back progress is the interconnect between chips.

Jeff Shainline, a researcher at the National Institute of Standards and Technology (NIST), explained: "When memory and processor are separated, the communication link connecting the two domains becomes the main bottleneck of the system." Professor Jack Dongarra, a researcher at Oak Ridge National Laboratory in the United States, put it succinctly: "When we look at the performance of today's computers, we find that data transmission is the fatal bottleneck."
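The bottleneck the researchers describe can be made concrete with a simple roofline-style estimate: a kernel's attainable throughput is capped either by raw compute or by memory bandwidth times its arithmetic intensity (FLOPs performed per byte moved). The hardware numbers below are hypothetical, chosen only to show the shape of the argument, not to describe any real chip.

```python
# Roofline sketch: at low arithmetic intensity (typical of inference
# kernels such as matrix-vector products, which reuse each weight byte
# roughly once), memory bandwidth, not compute, sets the ceiling.
PEAK_FLOPS = 100e12      # 100 TFLOP/s of raw compute (assumed)
MEM_BANDWIDTH = 2e12     # 2 TB/s memory bandwidth (assumed)

def attainable_flops(arithmetic_intensity):
    """FLOP/s achievable at a given FLOPs-per-byte ratio."""
    return min(PEAK_FLOPS, MEM_BANDWIDTH * arithmetic_intensity)

for intensity in [1, 10, 50, 100]:
    print(f"{intensity:>3} FLOP/byte -> {attainable_flops(intensity)/1e12:.0f} TFLOP/s")
```

Until intensity reaches 50 FLOPs per byte, the assumed chip delivers only a fraction of its peak: the interconnect, not the arithmetic units, is the limit.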

AI inference vs. AI training

AI systems perform different kinds of computation when training a model than when using it to make predictions. Training loads tens of thousands of image or text samples into a Transformer-based model and then begins processing them. The thousands of cores in a GPU handle large, rich datasets such as images or video very efficiently, and if results are needed faster, more cloud-based GPUs can be rented.


Although a single AI inference requires far less energy than training, when hundreds of millions of users rely on auto-completion, an enormous number of computations and predictions are needed to decide which word comes next, and in aggregate this consumes more energy than the one-time training run.

For example, Facebook's data centers serve trillions of inferences every day, a number that has more than doubled in the past three years. Research has found that running language-translation inference on a large language model (LLM) consumes two to three times more energy than the initial training.

Surge in demand tests computing efficiency

ChatGPT became a worldwide sensation at the end of last year, and GPT-4 is even more impressive. If more energy-efficient methods can be adopted, AI inference can be extended to a wider range of devices and enable new ways of computing.

For example, Microsoft's Hybrid Loop is designed to build AI experiences that dynamically span cloud computing and edge devices, letting developers defer until runtime the decision of whether inference runs on the Azure cloud platform, a local client computer, or a mobile device, in order to maximize efficiency. Facebook introduced AutoScale to help decide efficiently, at runtime, where to compute inference.
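The runtime-placement idea behind these systems can be sketched as a simple policy function. The signals, thresholds, and device names below are invented for illustration; this is not Microsoft's or Meta's actual API, just a minimal sketch of the decision being deferred to runtime.

```python
# Toy placement policy: choose where inference runs based on signals
# that are only known at runtime. All thresholds are hypothetical.
def choose_inference_target(model_mb, battery_pct, network_ms):
    """Pick "cloud" or "device" from simple runtime signals."""
    if model_mb > 500:       # model too large for the edge device
        return "cloud"
    if battery_pct < 20:     # preserve battery on the edge device
        return "cloud"
    if network_ms > 200:     # poor connectivity: stay local
        return "device"
    return "device" if model_mb < 100 else "cloud"

print(choose_inference_target(model_mb=50, battery_pct=80, network_ms=30))   # device
print(choose_inference_target(model_mb=800, battery_pct=80, network_ms=30))  # cloud
```

The point is that no single placement is always most efficient; the best choice depends on conditions the developer cannot know ahead of time.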

Improving efficiency requires overcoming the obstacles that hold back AI and finding effective methods to do so.

Sampling and pipelining can speed up deep learning by reducing the amount of data processed. SALIENT (for SAmpling, sLIcing, and data movemENT) is a new approach developed by researchers at MIT and IBM to address these critical bottlenecks; it can significantly reduce the cost of running neural networks on large datasets containing 100 million nodes and 1 billion edges. But it also trades away some accuracy and precision, which is acceptable when choosing the next social post to display, but not when trying to identify unsafe conditions on a worksite in near real time.

Tech companies such as Apple, Nvidia, Intel, and AMD have announced processors with integrated dedicated AI engines, and AWS is even developing a new Inferentia 2 processor. But these solutions still use the traditional von Neumann processor architecture, with integrated SRAM and external DRAM, all of which requires more power to move data in and out of memory.

In-memory computing may be the solution

In addition, researchers have discovered another way to break through the "memory wall": bringing computation closer to memory.

The memory wall is the physical barrier that limits how fast data can enter and exit memory, a fundamental limitation of the traditional architecture. In-memory computing (IMC) addresses this challenge by running AI matrix calculations directly in the memory module, avoiding the overhead of sending data over the memory bus.

IMC is well suited to AI inference because inference involves a relatively static but large set of weights that is accessed repeatedly. While some data always moves in and out, IMC eliminates much of the energy cost and latency of data movement by keeping data in the same physical unit, where it can be efficiently used and reused across multiple calculations.
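Counting bytes moved makes the weight-stationary advantage concrete. The sketch below is a software analogy, not a hardware model: it compares keeping a weight matrix resident (the IMC idea) against reloading it for every input, with sizes chosen purely for illustration.

```python
import numpy as np

# Weight-stationary analogy: a 1024x1024 float32 weight matrix (~4 MB)
# stays put while input vectors stream past it.
W = np.random.rand(1024, 1024).astype(np.float32)

def bytes_moved(reload_weights: bool, n_inputs: int) -> int:
    """Bytes transferred for n_inputs inferences under each policy."""
    x_bytes = 1024 * 4                       # one float32 input vector
    w_bytes = W.nbytes if reload_weights else 0
    return n_inputs * (x_bytes + w_bytes)

streamed = bytes_moved(reload_weights=False, n_inputs=1000)
reloaded = bytes_moved(reload_weights=True, n_inputs=1000)
print(f"weights stationary: {streamed / 1e6:.1f} MB moved")
print(f"weights reloaded:   {reloaded / 1e6:.1f} MB moved")
```

With the weights stationary, a thousand inferences move about 4 MB of inputs; reloading the weights each time moves roughly a thousand times more, which is the traffic IMC is designed to eliminate.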

This approach also improves scalability because it works well with modern chip designs. With such chips, AI inference can be tested on a developer's computer and then deployed to production through the data center, where large fleets of machines with many such processors can efficiently run enterprise-scale AI models.

Over time, IMC is expected to become the dominant architecture for AI inference use cases. This makes sense when users are dealing with massive datasets and trillions of calculations: no resources are wasted shuttling data across the memory wall, and the approach scales easily to meet long-term needs.

Summary:

The AI industry is now at an exciting turning point. Technological advances in generative AI, image recognition, and data analytics are revealing unique connections and uses for machine learning, but a technology solution that can meet this demand must first be built: according to Gartner's predictions, unless more sustainable options become available, AI will consume more energy than human activity by 2025. A better way must be found before that happens.
