


Next-generation AI chip performance doubled? New technology could mimic the human brain to save energy…
According to reports, a research team led by Professor Hussam Amrouch of the Technical University of Munich (TUM) has developed a computer architecture for artificial intelligence that is twice as efficient as comparable in-memory computing approaches.
The results were recently published in the journal "Nature". The new chip technology integrates data storage and processing, greatly improving efficiency and performance. The chips, inspired by the human brain, are expected to be commercially available within three to five years, and meeting industry safety standards will require interdisciplinary collaboration.
Amrouch's team implemented the new computing paradigm using special circuits called ferroelectric field-effect transistors (FeFETs). Within a few years, this could prove useful for generative artificial intelligence, deep learning algorithms, and robotics applications.
The basic idea is simple: in the past, the transistors on a chip were used only for calculations; now they are also used to store data. This saves both time and energy. "As a result, the performance of the chip also improves," Amrouch said.
As demands continue to grow, future chips must be faster and more efficient than their predecessors. They must also not heat up too quickly, which is essential if they are to support applications such as real-time computing on a drone in flight.
Researchers say such tasks are extremely complex and energy-intensive for computers.
These key requirements can be summarized by the metric TOPS/W: tera-operations per second per watt. It can be seen as an important indicator for future chips: how many trillions of operations (TOPS) a processor can perform per second when supplied with one watt (W) of power.
The new artificial intelligence chip delivers 885 TOPS/W, making it twice as powerful as comparable AI chips, including Samsung's MRAM chip. Commonly used CMOS (complementary metal-oxide-semiconductor) chips operate in the range of 10-20 TOPS/W.
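As a rough illustration of what the TOPS/W figure means in practice, the short sketch below converts the metric into energy per operation and into the power needed for a fixed workload. The efficiency values are the ones quoted above; the 10-TOPS workload is an invented example, not a figure from the article.

```python
# Rough illustration of the TOPS/W metric (tera-operations per second per watt).
# Chip efficiencies are taken from the article; the workload value is hypothetical.

def energy_per_op_picojoules(tops_per_watt: float) -> float:
    """Energy per single operation in picojoules, given efficiency in TOPS/W."""
    ops_per_joule = tops_per_watt * 1e12   # 1 TOPS/W = 1e12 operations per joule
    return 1e12 / ops_per_joule            # convert joules per op to picojoules per op

def power_for_workload_watts(workload_tops: float, tops_per_watt: float) -> float:
    """Power needed to sustain a workload of `workload_tops` tera-ops per second."""
    return workload_tops / tops_per_watt

chips = {"TUM FeFET chip": 885.0, "typical CMOS chip": 15.0}  # TOPS/W
workload_tops = 10.0  # hypothetical real-time workload, e.g. on a flying drone

for name, efficiency in chips.items():
    print(f"{name}: {energy_per_op_picojoules(efficiency):.4f} pJ/op, "
          f"{power_for_workload_watts(workload_tops, efficiency):.2f} W "
          f"for {workload_tops} TOPS")
```

Run as-is, this shows the intuition behind the metric: the higher the TOPS/W figure, the less energy each operation costs and the smaller the power budget for the same real-time workload.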
Specifically, the researchers modeled the chip architecture on the human brain. "In the brain, neurons process signals and synapses remember the information," said Amrouch, describing how humans learn and recall complex relationships.
To achieve this, the chip uses ferroelectric field-effect transistors (FeFETs). These electronic switches have the special additional property that their polarization can be reversed by applying a voltage and is then retained, allowing them to store information even in the event of a power outage. In addition, they can store and process data at the same time.
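The article does not give implementation details, but a common way to picture in-memory computing is an array of non-volatile cells (here, the FeFETs) whose stored states act as weights, so that a multiply-accumulate happens where the data already lives instead of being shuttled to a separate processor. The sketch below is only a conceptual model of that idea, not the TUM design; the class name, array sizes, and values are invented for illustration.

```python
import numpy as np

# Conceptual model of in-memory computing: weights are held persistently in the
# memory array (like FeFET polarization states), and the multiply-accumulate is
# performed directly on the stored values rather than moving data to a separate ALU.

class InMemoryCrossbar:
    def __init__(self, weights: np.ndarray):
        # The "non-volatile" weight matrix: conceptually programmed once into the cells.
        self.weights = weights

    def multiply_accumulate(self, inputs: np.ndarray) -> np.ndarray:
        # Each output is a weighted sum of the inputs, computed "inside" the array.
        return self.weights @ inputs

# Invented example: a tiny 3x4 synapse array and a 4-element input signal.
rng = np.random.default_rng(0)
crossbar = InMemoryCrossbar(rng.normal(size=(3, 4)))
print(crossbar.multiply_accumulate(rng.normal(size=4)))
```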
Amrouch said: "We can now build highly efficient chipsets for applications such as deep learning, generative artificial intelligence or robotics, where data must be processed where it is generated."
However, the professor at the Munich Institute of Robotics and Machine Intelligence (MIRMI) at the Technical University of Munich believes it will take several years to reach this goal. He expects the first in-memory chips suitable for practical applications to be available in three to five years at the earliest.
Source: Financial Associated Press