
Accelerating the Cloud: What to Expect When Going Cloud Native


Ampere Cloud Native Platform: The perfect combination of performance, sustainability and cost-effectiveness

This article is the fourth part of Ampere Computing's "Accelerating Cloud Computing" series, which explores the many benefits of migrating to cloud-native platforms. The previous article explained the differences between the x86 architecture and cloud-native platforms, as well as the investment required for a cloud-native migration. This article focuses on the advantages cloud-native platforms deliver.

The advantages of cloud-native processors in cloud computing:

  • Higher performance per rack and per dollar
  • Greater predictability and consistency
  • Improved efficiency
  • Better scalability
  • Lower operating costs

Cloud-native processors deliver peak performance

Unlike the x86 architecture, which carries a large amount of legacy functionality, Ampere cloud-native processors are designed to execute common cloud application tasks efficiently. This significantly improves performance for the critical cloud workloads that enterprises rely on most.


Figure 1: The Ampere cloud-native platform delivers significantly higher performance on critical cloud workloads than x86 platforms. Image from "Core Sustainability of Cloud Native Processors".

Cloud native brings faster, more consistent, and more predictable response times

For applications that provide network services, the response time of user requests is a key performance metric. Response speed depends on load and scalability; maintaining acceptable end-user response time is critical as request rates rise.

While peak performance is important, many applications must also meet specific service level agreements (SLAs), such as responding within two seconds. Cloud operations teams therefore usually measure responsiveness with P99 latency, the time within which 99% of requests complete.

To measure P99 latency, we increase the request rate to the service to find the point at which 99% of transactions still complete within the required SLA. This lets us evaluate the maximum throughput achievable while maintaining the SLA, and the impact on performance as the number of users grows.
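As a concrete illustration of this procedure, the sketch below post-processes the latency samples collected at each load step. It is a minimal sketch: the nearest-rank percentile method, the function names, and the 2-second SLA default are illustrative assumptions, not details taken from Ampere's benchmarks.

    import math

    def p99_latency(latencies_ms):
        """Return the 99th-percentile latency: 99% of requests complete within this time."""
        ordered = sorted(latencies_ms)
        index = min(len(ordered) - 1, math.ceil(0.99 * len(ordered)) - 1)  # nearest-rank percentile
        return ordered[index]

    def max_rps_within_sla(results, sla_ms=2000):
        """results maps each tested request rate (RPS) to the latency samples observed at that rate.
        Return the highest rate whose P99 latency still meets the SLA, or None if none qualifies."""
        passing = [rps for rps, samples in results.items() if p99_latency(samples) <= sla_ms]
        return max(passing) if passing else None

In practice the latency samples would come from a load generator that steps up the request rate while recording per-request response times.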

Consistency and predictability are the two major factors that affect overall latency and responsiveness: the smaller the variation in latency and performance between tasks, the more predictable the response time. Predictability also helps simplify workload balancing.

As mentioned in the first part of this series, x86 cores use hyperthreading to improve core utilization. Because two threads share one core, it is difficult to guarantee an SLA. Hyperthreading overhead and the inherent inconsistencies of other x86 architectural issues result in greater latency variation between tasks than on Ampere cloud-native processors (see Figure 2). As a result, x86-based platforms may sustain high peak performance but quickly violate the SLA because of this high latency variation. Furthermore, the stricter the SLA (milliseconds rather than seconds), the greater the impact of this variation on P99 latency and responsiveness.


Figure 2: Hyperthreading and other x86 architectural issues increase latency variation, which hurts throughput and SLA compliance. Image from "Core Sustainability of Cloud Native Processors".

In this case, the only way to reduce latency is to reduce the request rate. In other words, to hold the SLA, more x86 resources must be allocated so that each core runs at a lower load, compensating for the large variation in response times between threads under high load. x86-based applications are therefore more limited in the number of requests they can handle while still meeting their SLAs (see the sizing sketch below).
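To see why this forces over-provisioning, consider the back-of-the-envelope sizing sketch below. The request volume, per-core capacity, and utilization ceilings are hypothetical placeholders, not measured values; only the headroom each core must keep changes between the two calls.

    import math

    def cores_needed(total_rps, rps_per_core_at_sla, max_utilization):
        """Cores required when each core must stay below a utilization ceiling to hold the SLA."""
        return math.ceil(total_rps / (rps_per_core_at_sla * max_utilization))

    # Hypothetical numbers: identical per-core capacity, but one platform needs far more
    # headroom per core to keep tail latency inside the SLA.
    print(cores_needed(100_000, rps_per_core_at_sla=500, max_utilization=0.55))  # 364 cores
    print(cores_needed(100_000, rps_per_core_at_sla=500, max_utilization=0.85))  # 236 cores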

Performance and energy-efficiency comparisons of NGINX, Redis, H.264 media encoding, and Memcached


Cloud native delivers better cost-effectiveness

A cloud-native approach delivers consistent response times and higher performance within the SLA in a repeatable way, and that also means better price/performance. It directly reduces operating costs because more requests can be handled with fewer cores. In short, cloud-native platforms let applications do more with fewer cores without compromising SLAs. Higher utilization translates directly into lower operating costs, because fewer cloud-native cores are needed to manage the same load than on x86-based platforms.

So how much can you save? The basic unit of compute in the cloud is the vCPU. On x86-based platforms, however, each core runs two threads, so to get the effect of disabling hyperthreading you must rent x86 vCPUs in pairs; otherwise your application shares a physical core with another application.

On cloud-native platforms, renting a vCPU allocates an entire core. Consider that 1) a single Ampere-based vCPU at a cloud service provider (CSP) provides a full Ampere core, 2) Ampere provides more cores per socket and correspondingly higher performance per watt, and 3) the hourly cost of an Ampere vCPU is usually lower thanks to higher core density and lower operating costs. Together, these factors give the Ampere cloud-native platform up to a 4.28x price/performance advantage for some cloud-native workloads.
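The sketch below shows how such a price/performance comparison could be assembled from hourly vCPU pricing and measured throughput. The prices and throughput figures are hypothetical placeholders; only the 4.28x ratio quoted above comes from the article.

    def price_performance(requests_per_sec, price_per_vcpu_hour, vcpus):
        """Throughput delivered per dollar of hourly spend (higher is better)."""
        return requests_per_sec / (price_per_vcpu_hour * vcpus)

    # Hypothetical inputs. Isolating a full x86 core means renting two vCPUs (the two SMT
    # threads of one core), while a single cloud-native vCPU already maps to a full core.
    x86 = price_performance(requests_per_sec=10_000, price_per_vcpu_hour=0.048, vcpus=2)
    ampere = price_performance(requests_per_sec=12_000, price_per_vcpu_hour=0.040, vcpus=1)
    print(f"price/performance advantage: {ampere / x86:.2f}x")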

Higher energy efficiency, better sustainability and lower operating costs

Power consumption is a global problem, and managing it is rapidly becoming one of the main challenges facing cloud service providers. Data centers currently consume 1% to 3% of the world's electricity, and that share is expected to double by 2032. As of 2022, cloud data centers were expected to account for 80% of this energy demand.

Because its architecture has evolved over more than 40 years to serve many different use cases, an Intel x86 core consumes more power than most cloud microservices-based applications need. In addition, the rack's power budget and the heat these cores dissipate make it impossible for a CSP to fill a rack with x86 servers. Given the power and thermal limits of x86 processors, the CSP may have to leave parts of the rack empty, wasting valuable space. In fact, by 2025, traditional (x86) approaches to cloud computing are expected to double data center power demand and increase real estate demand by 1.6x.


Figure 7: The power and space required to sustain the expected growth of data centers. Image from "Core Sustainability of Cloud Native Processors".

Considering both cost and performance, cloud computing needs to move from general-purpose x86 computing to cloud-native platforms that are more energy efficient and higher performing. Specifically, data centers need higher core density and high-performance cores that are more efficient, consume less energy, and cost less to operate.

Because the Ampere cloud-native platform is designed for energy efficiency, applications consume less power without sacrificing performance or responsiveness. Figure 8 below shows the power consumption of large-scale workloads running on x86-based platforms and on the Ampere cloud-native platform. Depending on the application, Ampere's energy efficiency (measured as performance per watt) is significantly higher than that of the x86 platform.


Figure 8: Ampere cloud native platform has significantly higher energy efficiency in critical cloud workloads than x86 platforms. Image from "Core Sustainability of Cloud Native Processors".

The low-power architecture of cloud-native platforms enables higher core density per rack. For example, the high core counts of Ampere® Altra® (80 cores) and Altra Max (128 cores) let CSPs achieve remarkable core density. With Altra Max, a dual-socket 1U chassis can hold 256 cores (see Figure 9).
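The arithmetic behind that density is simple. The sketch below reproduces it, taking the 128-cores-per-socket and dual-socket 1U figures from the text; the usable rack height is a hypothetical assumption, and a real deployment is also bounded by the rack's power budget.

    CORES_PER_SOCKET = 128        # Ampere Altra Max, from the text
    SOCKETS_PER_1U_CHASSIS = 2    # dual-socket 1U chassis, from the text

    def cores_per_rack(usable_rack_units=40):
        """Assume a 42U rack with 2U reserved for switching; power limits ignored here."""
        cores_per_chassis = CORES_PER_SOCKET * SOCKETS_PER_1U_CHASSIS  # 256 cores per chassis
        return usable_rack_units * cores_per_chassis

    print(cores_per_rack())  # 10,240 cores per rack under these assumptions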

With cloud-native processors, developers and architects no longer have to choose between low power and high performance. The Altra family offers greater computing power, up to 2.5x the performance per rack, and needs roughly a third of the racks required to deliver the same computing performance as traditional x86 processors. The efficient architecture of cloud-native processors also provides the industry's best cost per watt.


Figure 9: The inefficiency of the x86 platform results in idle rack capacity, while the high energy efficiency of the Ampere Altra Max makes full use of all available space.

The advantages are impressive. By 2025, cloud-native applications running in Ampere-based cloud data centers could reduce power demand to an estimated 80% of current usage. Meanwhile, real estate demand is expected to fall by 70% (see Figure 7 above). The Ampere cloud-native platform offers a 3x performance-per-watt advantage, effectively tripling the capacity of a data center while its footprint stays the same.

Note that this cloud-native approach does not require advanced liquid-cooling technology. While liquid cooling does increase the density of x86 cores in a rack, it adds cost without adding new value. By letting CSPs do more with their existing real estate and power capacity, cloud-native platforms push the need for such advanced cooling further into the future.

The energy efficiency of cloud-native platforms means more sustainable cloud deployments (see Figure 10 below). It also lets companies reduce their carbon footprint, a factor increasingly valued by stakeholders such as investors and consumers. Meanwhile, CSPs can support more computing power to meet growing demand within their existing real estate and power constraints. CSPs seeking to expand into the cloud-native market will also factor power spending into compute resource pricing, which gives cloud-native platforms a further competitive advantage.


Figure 10: Why cloud-native computing is crucial to sustainability. Image from "Core Sustainability of Cloud Native Processors".

Cloud native delivers better responsiveness and scalability

Cloud computing lets companies move away from large monolithic applications toward application components (or microservices) that can be scaled by creating more copies of each component on demand. Because these cloud-native applications are distributed by nature and designed for cloud deployment, they can scale seamlessly, even to 100,000 users, on cloud-native platforms.

For example, if you deploy multiple MySQL containers, you need each container to deliver stable performance. With Ampere, each application gets its own core: there is no need to verify isolation from another thread and no overhead from managing hyperthreading. Each application instead delivers consistent, predictable, and repeatable performance with seamless scalability.
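One concrete way to express that kind of isolation on Kubernetes, sketched below as an illustration rather than anything Ampere-specific, is a Guaranteed-QoS pod: with the kubelet's static CPU manager policy enabled, a container whose integer CPU request equals its limit is granted exclusive cores. The pod name, namespace, image, and sizes are hypothetical.

    import json

    # Illustrative Guaranteed-QoS MySQL pod. Integer CPU requests that equal the limits,
    # combined with the static CPU manager policy, pin the container to dedicated cores
    # so it never shares a core with another workload.
    mysql_pod = {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": "mysql-0", "namespace": "demo"},  # hypothetical names
        "spec": {
            "containers": [{
                "name": "mysql",
                "image": "mysql:8.0",
                "resources": {
                    "requests": {"cpu": "2", "memory": "4Gi"},  # integer CPUs -> exclusive cores
                    "limits": {"cpu": "2", "memory": "4Gi"},    # equal to requests -> Guaranteed QoS
                },
            }],
        },
    }

    print(json.dumps(mysql_pod, indent=2))  # kubectl apply -f accepts JSON as well as YAML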

Another advantage of going cloud native is linear scalability. In short, each additional cloud-native core improves performance in a linear way, whereas per-core x86 performance declines as utilization increases. Figure 11 below illustrates this with H.264 encoding.
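A minimal model of what Figure 11 depicts is sketched below: cloud-native throughput grows linearly with core count, while the per-core contribution of the x86 platform shrinks as more of the socket is loaded. The per-core throughput and falloff coefficient are invented for illustration, not measurements.

    def cloud_native_throughput(cores, per_core=100):
        """Linear scaling: every additional core contributes the same throughput."""
        return cores * per_core

    def x86_throughput(cores, per_core=100, falloff=0.004):
        """Sub-linear scaling: each additional core contributes a little less as load rises."""
        return sum(per_core * max(0.0, 1.0 - falloff * n) for n in range(cores))

    for cores in (16, 32, 64, 128):
        print(cores, cloud_native_throughput(cores), round(x86_throughput(cores)))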


Figure 11: Ampere cloud-native computing scales linearly and leaves no capacity idle, unlike x86 computing. Image from "Core Sustainability of Cloud Native Processors".

Summary of cloud native advantages

It is clear that current x86 technology will not be able to meet increasingly strict power limits and regulations. Thanks to its efficient architecture, the Ampere cloud-native platform provides 2x the per-core performance of the x86 architecture. In addition, lower latency variation yields greater consistency, greater predictability, and better response times, letting you meet SLAs without significantly over-provisioning compute resources. The simplified architecture of cloud-native platforms also brings higher energy efficiency, which means more sustainable operations and lower operating costs.

The proof of cloud-native efficiency and scalability shows most clearly at high load, such as serving 100,000 users. This is where the consistency of the Ampere cloud-native platform brings huge advantages: up to 4.28x better price/performance than x86 for large-scale cloud-native applications while still maintaining customer SLAs.

In the fifth part of this series, we will cover how to work with partners to start leveraging cloud-native platforms right away while minimizing investment or risk.

Please visit the Ampere Computing Developer Center for more related content and the latest news. You can also sign up for the Ampere Computing developer newsletter or join the Ampere Computing developer community.

We have written this article in collaboration with Ampere Computing. Thank you for supporting the partners who made SitePoint possible.

