Development of universal programmable DPU for cloud computing
Viewed against the technological development and evolution of data centers, the DPU, as a general-purpose data processor, is not a simple replacement for the NIC/SmartNIC but an essential change to the network infrastructure itself. With universal hierarchical programmability, low-latency networking, and unified management and control, the DPU is driving the architectural optimization and reconstruction of the new generation of data centers. As the basic component of general data processing, the DPU offloads data processing tasks that originally ran on the CPU and GPU, freeing their computing power so that both can deliver greater performance.
##"Cloud Computing Universal Programmable DPU Development White Paper (2023)" The white paper clarifies And analyze the process and current situation of DPU development, and point out which DPU characteristics are the key points to solve the above core problems, thereby promoting the in-depth development of DPU technology and helping to achieve a complete ecological chain construction and industrial implementation.
For DPU applications and technical principles, please refer to the articles "DPU technical principles, computing power efficiency and application scenario analysis", "Mainstream DPU architecture implementation and technology comparison", "DPU Performance Benchmark: Introduction to Evaluation Framework and Test Process (2022)", and "DPU global pattern, The Rise of 5 Domestic Companies (2023)".
The white paper focuses on the general programmable features a DPU needs and the application scenarios it serves, and also analyzes the limitations of traditional DPUs. In recent years, because the industry has lacked a mature commercial DPU SoC (System on Chip) solution, major cloud vendors have had to develop their own DPU solutions based on CPU plus FPGA, which led to the DPU being misread as a fragmented market and its role and potential in cloud computing being misunderstood. After the "14th Five-Year Plan" called for accelerating the construction of new infrastructure, the "East Data, West Computing" project and the operators' computing power networks arrived on schedule. Behind the digital economy, cloud computing is the core computing power base, and within cloud computing the DPU has become one of the core components of the infrastructure.
In the era of the digital economy, cloud computing continues to enter every industry. As the "national team" of cloud computing, China Mobile is increasing its investment to fully support the digital transformation of government and state-owned enterprises, reduce costs and increase efficiency, and protect the security of state-owned data. Yunbao Intelligence (Cloud Leopard Intelligence), the only company China Mobile invited to participate in compiling the white paper, is the leading domestic DPU chip company and the only chip company in China known to have achieved a high-performance universal programmable DPU SoC. The joint release of this white paper reflects the in-depth cooperation between China Mobile and Yunbao Intelligence in the DPU field, joining forces to advance the development of national cloud infrastructure and the DPU.
DPU-centered data center network architecture
As network bandwidth evolves from 25Gbps to 100Gbps, 200Gbps, 400Gbps and beyond, the CPU computing power that traditional data centers spend on network data processing keeps growing; in some cases more than half of it is consumed by these infrastructure functions. There is therefore an urgent need for a new type of processor to reduce this consumption on the cloud host CPU. The DPU is a general-purpose processor centered on data processing that provides data center infrastructure services: the "third main chip" after the CPU and GPU. It offloads and accelerates networking and storage, provides basic functions such as security and management, and releases more computing power for customers to use. In cloud computing and data center scenarios, whenever computing power must be improved further and infrastructure performance maximized, for example by dynamically and flexibly scheduling computing, network, and storage resources, the DPU is necessary and irreplaceable.
At present, most domestic cloud vendors still use DPU solutions based on CPU plus FPGA. These solutions offer a certain time-to-market advantage for the R&D investment, but their high power consumption and limited performance fall short of the requirements of new-generation cloud computing. In addition, because the FPGA market is essentially monopolized by two foreign chip giants, high FPGA prices translate directly into high product costs and weakened market competitiveness.
DPU SoC products are the ultimate iteration of these CPU plus FPGA solutions. They demand sophisticated heterogeneous chip design, universal programmability, and advanced process technology to serve more complex, broader, and higher-performance applications. Foreign chip giants and leading cloud service providers have already chosen the general-purpose DPU SoC route, because compared with a CPU plus FPGA solution, a DPU SoC improves cost-effectiveness by a factor of 4 to 8.
Every cloud vendor is looking for the best solution to improve its profits and competitiveness. Understanding that CPU plus FPGA is not a long-term answer for the new generation of cloud computing, they are all waiting for a competitive, powerful, easy-to-use, and cost-effective DPU SoC to appear.
Amazon Web Services (AWS) in the United States not only holds the largest share of the global cloud computing market, it was also the first to successfully commercialize a DPU SoC (which AWS calls Nitro) years ago. With its self-developed DPU SoC, AWS can earn thousands of dollars more per server per year from selling computing resources; with millions of servers, the benefit the DPU brings to AWS is enormous. AWS's success has drawn widespread industry attention and attracted more and more chip giants to the DPU track. Nvidia acquired Mellanox, a well-known network chip and equipment company, for US$6.9 billion in 2020, and by integrating Mellanox's network technology it quickly launched the BlueField series of DPU SoCs for the global data center market. AMD acquired DPU SoC maker Pensando for US$1.9 billion in 2022. Domestic cloud vendors are likewise seeking technical paths to evolve from FPGA-based architectures to general programmable DPU SoCs.
It is against this background that China Mobile, the China Academy of Information and Communications Technology, and Yunbao Intelligence released the "Cloud Computing Universal Programmable DPU Development White Paper (2023)". The white paper analyzes the development trends of the DPU in depth: universal programmability, low-latency networking, and unified resource management. It also introduces application scenarios of the universal programmable DPU SoC in data centers, telecom operators, heterogeneous computing, and more.
In domestic data center construction, servers are moving from 25G to 100G and higher bandwidth, and application deployment keeps growing more complex: it must support the deployment and management not only of virtual machines and containers but also of bare-metal applications. As the core infrastructure component of the data center, the DPU must therefore offer flexible programmability, high data throughput, and unified management and control to meet the current needs of cloud computing services and data center development.
According to observers in the semiconductor industry, Yunbao Intelligence is currently the only chip company in China known to independently develop a high-performance DPU SoC, and its product is expected to be the first universal programmable DPU SoC chip in China. It not only provides data throughput of up to 400G but is also equipped with a powerful CPU processing unit that works with a variety of programmable data processing engines to achieve hierarchical programmability. According to the white paper, Yunbao Intelligence has mastered a number of core technologies in key areas of the DPU:
- Programmable high-performance network processing technology
- Programmable low-latency RDMA technology
- DDP (Data Direct Path) data pass-through technology
- Secure computing system
The Yunbao Intelligence DPU SoC supports unified operation, maintenance, and management of bare metal, virtual machines, and containers, and provides one-stop solutions for elastic networking and storage, virtualization management, and security. This greatly improves cloud service providers' service quality and business flexibility, reduces overall investment, and leads the continuous evolution of data centers toward the integration of computing and networking.
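The white paper does not publish programming interfaces for these technologies, so as a point of reference the sketch below uses the standard Linux RDMA verbs API (libibverbs), the common way host software talks to RDMA-capable NICs and DPUs. It only performs the groundwork that precedes any low-latency, zero-copy transfer: open a device, register a memory buffer, and query a port. It is a generic illustration of the kind of interface "programmable low-latency RDMA technology" exposes, not Yunbao Intelligence's actual SDK.

```c
/*
 * Minimal RDMA groundwork using the standard libibverbs API.
 * Generic illustration only, not the Yunbao Intelligence SDK.
 * Build with: gcc rdma_probe.c -o rdma_probe -libverbs
 */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "No RDMA-capable devices found\n");
        return 1;
    }
    printf("Using device: %s\n", ibv_get_device_name(devs[0]));

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { perror("ibv_open_device"); return 1; }

    /* A protection domain groups the resources one application may use. */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!pd) { perror("ibv_alloc_pd"); return 1; }

    /* Register a buffer so the NIC/DPU can DMA into it directly,
     * bypassing the host CPU on the data path (zero copy). */
    size_t len = 4096;
    void *buf = malloc(len);
    memset(buf, 0, len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { perror("ibv_reg_mr"); return 1; }
    printf("Registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* Query port 1. A real application would next create completion
     * queues and queue pairs, then post send/receive work requests. */
    struct ibv_port_attr port_attr;
    if (ibv_query_port(ctx, 1, &port_attr) == 0)
        printf("Port 1 state: %d, active MTU: %d\n",
               port_attr.state, port_attr.active_mtu);

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```

The point of offloading this path to a DPU is that, once the buffer is registered, data moves between the network and application memory without per-packet CPU involvement; the host CPU only sets up queues and handles completions.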
China Mobile, as a major cloud service provider supporting the country's digital economy, gives a clear answer in this white paper: the DPU SoC is a key component of cloud computing, and a universal programmable DPU SoC enables cost-effective offloading and management of a data center's computing, network, and storage resources. The white paper also clearly analyzes the key features a DPU SoC must possess: hierarchical programmability, low-latency networking, unified management and control, and accelerated offloading. These capabilities provide important technical support as cloud vendors push data centers toward high efficiency, high scalability, high bandwidth, high performance, and flexibility, and they are also the directions of DPU technology that cloud vendors are actively researching and exploring.