
Tencent releases a new-generation supercomputing cluster for large model training, with performance tripled

WBOY
Release: 2023-04-16 13:28:05

The new-generation HCC (High-Performance Computing Cluster) is built on the latest generation of Tencent's self-developed Xinghai servers and is equipped with NVIDIA H800 Tensor Core GPUs.

According to Tencent, the cluster's self-developed network and storage architecture delivers 3.2 Tbps of ultra-high interconnect bandwidth, terabyte-per-second storage throughput, and tens of millions of IOPS. Measured results show the new cluster's computing performance is three times that of the previous generation.


In October 2022, Tencent completed training of its first trillion-parameter AI model, the Hunyuan NLP large model. On the same dataset, training time was cut from 50 days to 11 days; on the new-generation cluster, it would drop further to 4 days.
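As a sanity check on those figures (the day counts are from the article; the arithmetic below is only an illustration):

```python
# Training-time figures quoted in the article, in days.
first_run = 50     # initial Hunyuan training run
previous_gen = 11  # later run, same dataset, previous-generation cluster
new_gen = 4        # projected time on the new-generation cluster

print(f"50 -> 11 days: {first_run / previous_gen:.1f}x faster")  # ~4.5x
print(f"50 -> 4 days:  {first_run / new_gen:.1f}x faster")       # 12.5x
print(f"11 -> 4 days:  {previous_gen / new_gen:.1f}x faster")    # ~2.8x, in line with the quoted 3x
```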

At the computing level, single-server performance is the foundation of cluster computing power. A single GPU in Tencent Cloud's new-generation cluster delivers up to 1,979 TFLOPS, depending on the precision used.
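Those peak numbers are precision-dependent: Tensor Cores run far faster at FP16/BF16/FP8 than at FP32, which is why large-model training typically uses mixed precision. A minimal PyTorch sketch of the idea (a generic illustration, not Tencent's training code; the model and data are placeholders):

```python
import torch
from torch import nn

# Hypothetical toy model and data, for illustration only.
model = nn.Linear(4096, 4096).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid FP16 underflow

x = torch.randn(32, 4096, device="cuda")
target = torch.randn(32, 4096, device="cuda")

for step in range(10):
    optimizer.zero_grad(set_to_none=True)
    # Run the forward pass in half precision so matmuls hit the Tensor Cores.
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()  # backward on the scaled loss
    scaler.step(optimizer)         # unscales gradients, then steps
    scaler.update()
```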

For large-model scenarios, the self-developed Xinghai server adopts a 6U ultra-high-density design, raising supported rack density 30% above the industry norm. Guided by a parallel-computing design philosophy, its CPU and GPU nodes are co-designed as an integrated unit, pushing single-node computing performance to a higher level.


At the network level, computing nodes must exchange massive amounts of data. As cluster scale grows, communication performance directly affects training efficiency, so the network and the computing nodes have to work together as closely as possible.

Tencent's self-developed Xingmai high-performance computing network claims the industry's highest RDMA communication bandwidth at 3.2 Tbps. Measured results show that, with the same number of GPUs, the 3.2T Xingmai network improves overall cluster computing power by 20% over a 1.6T network.

At the same time, Tencent's self-developed high-performance collective communication library, TCCL, incorporates customized optimizations. Compared with open-source collective communication libraries, it improves communication-load performance by 40% for large-model training and eliminates training interruptions caused by a range of network issues.
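TCCL itself is proprietary, but the role a collective communication library plays can be shown with the open-source equivalent the article compares against. A minimal all-reduce sketch using PyTorch's NCCL backend (generic illustration; a library like TCCL sits at this same layer):

```python
import os
import torch
import torch.distributed as dist

def main():
    # A launcher such as torchrun sets RANK, LOCAL_RANK, WORLD_SIZE,
    # MASTER_ADDR, and MASTER_PORT for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each rank contributes a gradient-like tensor...
    grad = torch.full((1024,), float(dist.get_rank()), device="cuda")
    # ...and all-reduce sums it across every participating GPU.
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)

    print(f"rank {dist.get_rank()}: reduced value = {grad[0].item()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Run with, for example, `torchrun --nproc_per_node=8 allreduce_demo.py`. Gradient all-reduces like this are the kind of operation whose bandwidth the 3.2T network and the communication library are optimizing.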


At the storage level, large numbers of computing nodes read the same batches of data simultaneously during large-model training, so data-loading time must be kept as short as possible to avoid leaving computing nodes idle.

Tencent Cloud's self-developed storage architecture provides terabyte-per-second throughput and tens of millions of IOPS, supporting storage needs across scenarios. Its COS GooseFS object-storage solution and CFS Turbo high-performance file-storage solution together meet the high-performance, high-throughput, and massive-capacity requirements of large-model scenarios.
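On the training-job side, the usual way to hide storage latency is to overlap data loading with computation. A generic PyTorch DataLoader sketch (illustrative only; the dataset, shapes, and the /mnt/goosefs mount path are placeholders, and a system like GooseFS or CFS Turbo would simply appear as a mounted filesystem):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class ShardDataset(Dataset):
    """Hypothetical dataset reading pre-tokenized shards from a mounted path."""
    def __init__(self, path="/mnt/goosefs/train"):  # placeholder mount point
        self.path = path

    def __len__(self):
        return 100_000

    def __getitem__(self, idx):
        # Real code would read shard files here; we fabricate a sample.
        return torch.randint(0, 50_000, (2048,))

loader = DataLoader(
    ShardDataset(),
    batch_size=8,
    num_workers=8,           # parallel reader processes hide per-file latency
    prefetch_factor=4,       # each worker keeps 4 batches in flight
    pin_memory=True,         # enables asynchronous host-to-GPU copies
    persistent_workers=True,
)

for batch in loader:
    batch = batch.cuda(non_blocking=True)  # overlap the copy with compute
    break
```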


In addition, the new-generation cluster integrates Tencent Cloud's self-developed TACO training acceleration engine, which applies extensive system-level optimizations across network protocols, communication strategies, AI frameworks, and model compilation, significantly reducing training, tuning, and compute costs.
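TACO's internals are not public, but "model compilation" as an optimization layer can be illustrated with an open-source analogue, PyTorch's torch.compile (a sketch of the concept, not TACO's mechanism; the model is a placeholder):

```python
import torch
from torch import nn

# Toy model; in practice this would be the large model being trained.
model = nn.Sequential(
    nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)
).cuda()

# torch.compile traces the model and fuses kernels, the same class of
# optimization the article attributes to TACO's compilation layer.
compiled = torch.compile(model)

x = torch.randn(16, 1024, device="cuda")
out = compiled(x)  # first call compiles; later calls run the optimized graph
```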

AngelPTM, the training framework behind Tencent's Hunyuan large model, is also offered as a service through Tencent Cloud TACO, helping enterprises accelerate the adoption of large models.

Through the large-model capabilities and toolchain of the Tencent Cloud TI platform, enterprises can fine-tune models on their own industry data, improving production efficiency and quickly building and deploying AI applications.
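The TI platform's own APIs are not shown in the article; as a rough illustration of what fine-tuning on industry data involves, here is a generic Hugging Face skeleton (the base model, file path, and hyperparameters are all placeholders):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Placeholder base model; a real deployment would start from the model
# provided on the platform.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Placeholder: a JSONL file of {"text": ...} records from the target industry.
data = load_dataset("json", data_files="industry_corpus.jsonl")["train"]
data = data.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=data.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=data,
    # mlm=False -> causal-LM objective; the collator also builds the labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```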


Relying on distributed cloud-native governance capabilities, the Tencent Cloud intelligent computing platform delivers a total of 16 EFLOPS of floating-point computing power.


Source: 51cto.com