Table of Contents
New record for single-chip training of large AI models
How does "Giant Core" defeat GPU?

The world's largest AI chip breaks the record for single-device training of large models; Cerebras wants to "kill" GPUs

Apr 25, 2023, 03:34 PM
Tags: chip, AI, training

This article is reproduced from Leifeng.com. To reprint it, please apply for authorization on Leifeng.com's official website.

Cerebras, the company famous for creating the world's largest accelerator chip, the CS-2 Wafer Scale Engine, announced yesterday that it has taken an important step in using its "giant core" for artificial-intelligence training: the company has trained the world's largest NLP (natural language processing) AI model ever trained on a single chip.

The model has 2 billion parameters and was trained on the CS-2 chip. The world's largest accelerator chip is built on a 7 nm process and etched from a single square of wafer hundreds of times larger than mainstream chips. It draws 15 kW of power and integrates 2.6 trillion transistors, 850,000 cores, and 40 GB of on-chip memory.


Figure 1: CS-2 Wafer Scale Engine chip

New record for single-chip training of large AI models

The development of NLP models is an important area of artificial intelligence. With NLP models, AI can "understand" the meaning of text and act on it. OpenAI's DALL·E is a typical example: it converts text entered by users into image output.

For example, when a user enters "avocado-shaped armchair", the AI automatically generates several images matching that phrase.


Figure 2: "Avocado-shaped armchair" images generated by the AI

Beyond that, such models can also give AI an understanding of complex knowledge such as species, geometry, and historical eras.

But none of this is easy to achieve: traditional NLP model development carries extremely high compute costs and technical barriers.

In fact, on numbers alone, the 2 billion parameters of Cerebras' model look somewhat mediocre next to its peers.

The DALL·E model mentioned earlier has 12 billion parameters, and the largest model at the time of writing is Gopher, launched by DeepMind at the end of last year, with 280 billion parameters.

But beyond the staggering numbers, the model Cerebras developed represents a genuine breakthrough: it reduces the difficulty of developing NLP models.

How does "Giant Core" defeat GPU?

Under the traditional process, developing NLP models requires developers to split a huge model into several functional parts and spread the workload across hundreds or thousands of graphics processing units.

Thousands of graphics processing units mean huge costs for manufacturers.

The technical difficulty is just as painful for manufacturers.

Slicing up a model is a bespoke problem: each neural network, the specifications of each GPU, and the network that connects (or interconnects) them are all unique, and the resulting partition is not portable across systems.

Manufacturers must consider all these factors clearly before the first training.

This work is extremely complex and sometimes takes several months to complete.
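Why slicing is so bespoke can be seen in even a toy planner. The sketch below (an illustration of the general idea, not Cerebras' or any vendor's actual tooling; all names and sizes are hypothetical) greedily packs contiguous layers onto GPUs subject to each device's memory budget; the resulting plan depends on both the model's layer sizes and the GPUs' capacities, so it must be redone for every model/cluster combination.

```python
def plan_pipeline(layer_bytes, gpu_capacity_bytes):
    """Greedily pack consecutive layers onto GPUs without exceeding capacity.

    Returns a list of (start, end) layer-index ranges, one per GPU used.
    """
    stages, start, used = [], 0, 0
    for i, size in enumerate(layer_bytes):
        if size > gpu_capacity_bytes:
            raise ValueError(f"layer {i} alone exceeds one GPU's memory")
        if used + size > gpu_capacity_bytes:   # current GPU is full
            stages.append((start, i))          # close this stage
            start, used = i, 0
        used += size
    stages.append((start, len(layer_bytes)))
    return stages

# A hypothetical 8-layer model with uneven layer sizes (in bytes):
layers = [4, 4, 6, 2, 8, 8, 2, 2]
print(plan_pipeline(layers, gpu_capacity_bytes=10))  # needs 5 GPUs
print(plan_pipeline(layers, gpu_capacity_bytes=16))  # needs only 3 GPUs
```

The same model partitions completely differently on different hardware, which is exactly why such plans are not portable across systems.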

Cerebras says this is "one of the most painful aspects" of NLP model training. Only a handful of companies have the resources and expertise to develop NLP models; for the rest of the AI industry, NLP training is too expensive, too time-consuming, and out of reach.

But if a single chip can support a model with 2 billion parameters, there is no need to use massive numbers of GPUs to spread out the training workload. That spares manufacturers the cost of thousands of GPUs and the associated hardware and scaling requirements, along with the pain of slicing up a model and distributing its workload across thousands of devices.
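A back-of-the-envelope check makes the single-chip claim plausible. Using a commonly cited rule of thumb (not Cerebras' own accounting) that mixed-precision Adam training needs on the order of 16 bytes per parameter (fp16 weights and gradients plus fp32 master weights and two optimizer moments), a 2-billion-parameter model's training state fits comfortably in 40 GB, while larger peers would not:

```python
# Rough rule of thumb for mixed-precision Adam training state:
# fp16 weights + gradients, fp32 master weights + two optimizer moments.
BYTES_PER_PARAM_TRAINING = 16
ON_CHIP_GB = 40

def fits_on_chip(params):
    """Return (training-state size in GB, whether it fits in 40 GB)."""
    gb = params * BYTES_PER_PARAM_TRAINING / 1e9
    return gb, gb <= ON_CHIP_GB

for name, params in [("2B-parameter model", 2e9), ("DALL·E (12B)", 12e9)]:
    gb, ok = fits_on_chip(params)
    print(f"{name}: ~{gb:.0f} GB -> {'fits' if ok else 'does not fit'} in {ON_CHIP_GB} GB")
```

Under this estimate the 2B model needs roughly 32 GB, within the CS-2's 40 GB of on-chip memory, while a 12B model would need around 192 GB.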

Cerebras is not just chasing numbers; parameter count is not the only measure of a model's quality. Rather than hoping the model born on the "giant core" will simply work harder, Cerebras hopes it will be smarter.

Cerebras achieved this explosive growth in parameter count through its Weight Streaming technology, which decouples the compute and memory footprints and allows memory to scale independently, large enough to hold the ever-growing parameter counts of AI workloads.
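The core idea of streaming weights from external memory can be sketched in a few lines (a conceptual illustration only, not Cerebras' implementation; the layer sizes are hypothetical): weights live off-chip and flow through the chip one layer at a time, so the on-chip working set is bounded by the largest single layer rather than by total model size.

```python
def peak_on_chip_bytes(layer_bytes):
    """On-chip weight footprint when streaming one layer at a time."""
    return max(layer_bytes)

def total_model_bytes(layer_bytes):
    """What a chip would need if the whole model had to be resident at once."""
    return sum(layer_bytes)

shallow = [8, 8, 8]         # hypothetical layer sizes (bytes)
deep    = [8, 8, 8] * 100   # 100x the parameters...

print(total_model_bytes(deep) // total_model_bytes(shallow))    # 100x the weights,
print(peak_on_chip_bytes(deep) == peak_on_chip_bytes(shallow))  # same on-chip need
```

This is why, under such a scheme, parameter count can grow essentially without bound while the chip's memory requirement stays flat.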

Thanks to this breakthrough, the time to set up a model drops from months to minutes, and developers can switch between models such as GPT-J and GPT-Neo with "just a few keystrokes." This makes NLP development far easier.

This has brought about new changes in the field of NLP.

As Dan Olds, Chief Research Officer of Intersect360 Research, commented on Cerebras' achievement: "Cerebras' ability to bring large language models to the masses in a cost-effective, accessible way opens up an exciting new space for artificial intelligence."


Statement of this Website
The content of this article is voluntarily contributed by netizens, and the copyright belongs to the original author. This site does not assume corresponding legal responsibility. If you find any content suspected of plagiarism or infringement, please contact admin@php.cn
