While the world is still scrambling to buy NVIDIA's H100 to meet surging demand for AI compute, on Monday local time NVIDIA quietly announced the H200, its latest AI chip for training large AI models. Compared with its predecessor, the H100, the H200 delivers roughly 60% to 90% higher performance.
The H200 is an upgraded version of the H100 and, like it, is built on the Hopper architecture. The main upgrades are 141GB of HBM3e memory and memory bandwidth raised from the H100's 3.35TB/s to 4.8TB/s. According to NVIDIA's website, the H200 is the company's first chip to use HBM3e, which is faster and higher in capacity and therefore better suited to large language models. NVIDIA said: "With HBM3e, NVIDIA H200 delivers 141GB of memory at 4.8TB per second, nearly twice the capacity and 2.4 times the bandwidth compared to the A100."
According to NVIDIA's official charts, the H200's output speed on the large models Llama 2 and GPT-3.5 is 1.9 times and 1.6 times that of the H100 respectively, and in high-performance computing (HPC) workloads it is up to 110 times faster than a dual x86 CPU system. The H200 is expected to begin shipping in the second quarter of next year; NVIDIA has not yet announced pricing.
However, as with the H100 and its other advanced AI chips, NVIDIA will not be supplying the H200 to Chinese manufacturers.
On October 17 this year, the U.S. Department of Commerce issued new export control rules for chips, tightening restrictions on high-performance AI chips in particular. One key change adjusts the criteria for restricted advanced computing chips and introduces a new "performance density threshold" parameter. Under the new rules, NVIDIA's China-specific "special edition" H800 and A800 are now barred from export to China.
The lack of access to the most advanced AI chips will pose short-term challenges for China's domestic AI industry, but it also brings opportunities. First, there is an opening for domestic compute chips to substitute for imports: Baidu, for example, recently ordered 1,600 Huawei Ascend 910B chips. Second, the mismatch between supply and demand has pushed up compute rental prices, which benefits computing power leasing companies; on the 14th, Huina Technology announced that service charges for high-performance computing servers built on NVIDIA A100 chips would be raised by 100%. Finally, advanced packaging technologies such as chiplets can partly offset the shortage of advanced process capacity and are expected to see accelerated development. So beyond Huawei Ascend, which domestic compute players can carry the banner? Which companies sit in the related supply chain? What other areas stand to benefit indirectly?