


Joining forces with upstream chip manufacturers, ASUS promotes a new ecosystem of AI chips and leads the innovation of smart terminal technology
Recently, at the "Insight 2024 7th China Enterprise Service Annual Conference", well-known manufacturers, innovative enterprises, and representative industry users gathered to discuss how to seize new opportunities in artificial intelligence and respond to the wave of digitalization. At the meeting, Shi Qifei, ASUS's business director for North China, delivered a keynote speech titled "Rebirth of Computing Power, Creating a New AI Chip Ecosystem". He believes that as large models become widespread, smart terminals will run their own "on-device" large models, giving rise to a new intelligent ecosystem.
Speaking about new AI products and the new ecosystem, Shi Qifei added that ASUS will work with upstream manufacturers such as AMD to explore new possibilities in AI chip technology, striving to supply foundational computing power for the coming era of large-scale computing, lay a solid groundwork for the industry's technological transformation, and continue to drive the innovation and upgrading of smart terminal products.
In recent years, ASUS has been committed to redefining terminal products such as thin-and-light notebooks through its innovative technical capabilities, and this claim is not unfounded. Among the many new products launched this year, one of the most eye-catching is the ASUS AI thin-and-light notebook. It is equipped with the AMD Ryzen 7 7940H mobile processor, which adopts a three-in-one "CPU + GPU + AI engine" architecture and is the first x86 processor to integrate the Ryzen AI engine. The Ryzen AI engine is built on the AMD XDNA architecture, which is naturally suited to neural-network workloads, making AI inference more flexible, more efficient, and higher-performing. Going forward, the AI engine will be adapted to more AI applications, ushering ASUS notebooks into a new AI era. At this conference, the ASUS Fearless 16 2023, which ships with this processor, won the "2023 China Smart Terminal AI Innovation Pioneer Award". This is just one example of ASUS working closely with upstream manufacturers to bring powerful AI computing to smart terminal products; in the future, more ASUS smart terminals will carry, support, and make use of AI.
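For developers, on-device inference on an NPU like the Ryzen AI engine is generally reached through a standard runtime rather than a bespoke API. Below is a minimal, non-authoritative sketch, assuming AMD's Ryzen AI software stack is installed and exposes the NPU through ONNX Runtime's Vitis AI execution provider; the model file "model.onnx" and the provider configuration "vaip_config.json" are hypothetical placeholders.

```python
# Minimal sketch: running an ONNX model on an AMD Ryzen AI NPU via ONNX Runtime.
# Assumes the Ryzen AI software stack is installed so that the Vitis AI
# execution provider is available; file names below are placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],  # fall back to CPU if needed
    provider_options=[{"config_file": "vaip_config.json"}, {}],
)

# Build a dummy input matching the model's declared shape (1 for dynamic dims)
# and run one inference pass.
first_input = session.get_inputs()[0]
input_shape = [dim if isinstance(dim, int) else 1 for dim in first_input.shape]
dummy_input = np.random.rand(*input_shape).astype(np.float32)
outputs = session.run(None, {first_input.name: dummy_input})
print("Output shape:", outputs[0].shape)
```

In this kind of setup the XDNA-based engine appears to the application as just another execution provider, so existing ONNX models can be offloaded to the NPU without rewriting them.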
As a pioneering innovator and leader in the notebook field, ASUS's exploration of the new ecosystem goes well beyond this. To improve users' visual experience, ASUS pioneered the "ASUS Good Screen" strategy as early as 2021, pledging to equip all of its thin-and-light laptops with OLED "good screens" to give users distinctive visual enjoyment. Last year, ASUS won three major honors in succession: No. 1 in the OLED notebook market, No. 1 in global sales of creator notebooks, and No. 1 in global sales of dual-screen notebooks. As the Good Screen strategy continues to advance, more and more consumers have the opportunity to experience first-hand the impressive color performance of the "ASUS Good Screen".
In addition, on the topic of performance, which thin-and-light notebooks usually shy away from, ASUS has never stopped demonstrating its technical strength. In 2023, ASUS and its upstream partners jointly developed the "ASUS Ultra-High Speed System Module" packaging technology, and the Lingyao X Ultra launched on this basis reaches up to 185W of power. Power on this level has never been seen in thin-and-light notebooks and, even among high-performance gaming notebooks, is found only in high-end series.
The important progress ASUS has already made lays a solid foundation for its future AI ecosystem. In 2024, ASUS is expected to launch a number of new PC products across the ProArt, Lingyao, Fearless, and Adou lines, all equipped with new AI processors to give users a better and more efficient AI experience. Its cooperation with upstream manufacturer AMD will further promote the application of AI technology across fields and scenarios. ASUS is committed to bringing richer, smarter, and more convenient smart terminal products to users worldwide, and to injecting more vitality into bringing AI to terminal devices.


