
Accelerating the Cloud: The Final Steps

Jennifer Aniston
Release: 2025-02-08 10:32:09


(This article is the fifth part of Ampere Computing’s “Accelerating Cloud Computing” series. You can read all articles on SitePoint.)

The final step in moving to cloud-native application development is deciding where to start. In this final installment of the series, we explore how to develop cloud-native applications, where to start the process within your organization, and the situations you may encounter along the way.

As shown in other parts of this series, cloud-native platforms are quickly becoming a powerful alternative to x86-based computing. As we demonstrated in Part IV, there is a significant difference between full-core Ampere vCPUs and half-core (SMT) x86 vCPUs in terms of performance, predictability, and energy efficiency.

How to develop cloud-native applications

The natural way to design, implement, and deploy distributed applications for a cloud-native computing environment is to break the application down into smaller components, or microservices, each responsible for a specific task. Within these microservices, you will usually have multiple technical elements that jointly provide the functionality. For example, your order management system might contain a private data store (perhaps caching orders and customer information in memory) and a session manager to handle customers' shopping carts, in addition to an API manager that enables front-end services to interact with it. It may also connect to an inventory service to determine merchandise availability, possibly a delivery module to determine shipping costs and delivery dates, and a payment service to collect payments.

The distributed nature of cloud computing lets applications scale with demand and lets you maintain application components independently in a way a monolithic application cannot. If your e-commerce website sees heavy traffic, you can scale the front end independently of the inventory service or payment engine, or add more workers to handle order management. Cloud-native applications are designed to isolate failures so that a fault in one component does not affect the others, unlike a single large application in which one failure can bring down the whole system.

In addition, the cloud-native approach lets software take full advantage of available hardware by creating only the servers needed to handle the current load and shutting resources down during off-peak hours. Modern cloud-native CPUs like Ampere's provide a large number of fast CPU cores with fast interconnects, allowing software architects to scale their applications effectively.

In the second and third parts of this series, we showed that migrating applications to Arm-based cloud-native platforms is relatively straightforward. In this article, we describe the steps usually needed to make that migration successful.

Where to start within your organization

The first step in migrating to Ampere's cloud-native Arm64 processors is selecting the right application. Applications that are tightly coupled to another CPU architecture may be harder to migrate, either because they have source-code dependencies on a specific instruction set or because of performance or functional constraints tied to it. However, Ampere processors are a great fit for many cloud applications, including:

  • Microservices applications and stateless services: If your application is decomposed into components that can be scaled independently on demand, Ampere processors are a great fit. A key part of decomposing applications to exploit the benefits of the cloud is separating stateful and stateless services. Stateless application components can scale horizontally to provide more capacity when needed, while stateful services such as databases store non-transitory data. Scaling stateless services is easy because you can load balance across many replicas of the service, adding more cores to your compute infrastructure to cope with rising demand. Thanks to Ampere's single-threaded CPU cores, you can run those cores at higher load without affecting application latency, improving overall price/performance.
  • Audio or video transcoding: Converting data from one codec to another (for example, in a video playback application or as part of an IP telephony system) is compute-intensive but usually not floating-point-intensive, and it scales well to many sessions by adding more workers. This type of workload therefore performs very well on the Ampere platform and can deliver a price/performance advantage of more than 30% over alternative platforms.
  • AI Inference: While training AI models benefits from very fast GPUs, applying a trained model to data in production is not especially floating-point-intensive. In fact, lower-precision 16-bit floating-point operations can satisfy the performance and quality SLAs of AI model inference and run well on a general-purpose processor. Moreover, AI inference can benefit from adding more workers and cores to respond to changes in transaction volume. Taken together, this means a modern cloud-native platform like Ampere offers excellent price/performance.
  • In-memory databases: Because Ampere cores are designed with a large L2 cache per core, they typically perform very well on memory-intensive workloads such as object and query caches and in-memory databases. Database workloads such as Redis, Memcached, MongoDB, and MySQL can leverage the large per-core cache to improve performance.
  • Continuous Integration Build Farms: Building software can be very compute-intensive and highly parallelizable. Running builds and integration tests as part of a continuous-integration practice, and using continuous-delivery practices to validate new releases bound for production, can benefit from running on Ampere CPUs. As part of a migration to the Arm64 architecture, building and testing your software on that architecture is a prerequisite, and performing this work on native Arm64 hardware will speed up builds and increase the development team's throughput.

Analyze your application dependencies

Once you have selected an application you think is suitable for migration, your next step is to determine what work is needed to update its dependency stack. The dependency stack includes the host or guest operating system, the programming language and runtime, and any application dependencies your service may have. The Arm64 instruction set used in Ampere CPUs has risen to prominence only in recent years, and many projects have invested heavily in improving Arm64 performance during that time. As a result, a common theme in this section is "newer versions will be better".

  • Operating System: Because the Arm64 architecture has made huge progress over the past few years, you will want to run a recent operating system to take advantage of the performance improvements. For Linux distributions, any recent mainstream distribution provides native Arm64 binary installation media or Docker base images. If your application currently runs on an older operating system such as Red Hat Enterprise Linux 6 or 7, or Ubuntu 16.04 or 18.04, you should consider updating the base operating system.
  • Language Runtime/Compiler: All modern programming languages are available for Arm64, but the latest versions of popular languages may include additional performance optimizations. Notably, recent versions of Java, Go, and .NET have significantly improved performance on Arm64.
  • Application Dependencies: Beyond the operating system and programming language, you also need to consider other dependencies. That means checking the third-party libraries and modules your application uses and verifying that each is available for Arm64 and packaged for your distribution, while also accounting for external dependencies such as databases, antivirus software, and any other applications your service requires. Dependency analysis should weigh several factors, including whether Arm64 builds of each dependency exist and the performance impact if a dependency relies on platform-specific optimizations. In some cases you may be able to migrate while losing some functionality; in others, migration may require engineering work to adapt optimizations to the Arm64 architecture.
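A first pass at this dependency audit can be automated. The sketch below scans an application tree for bundled native shared libraries that were compiled for x86-64 and would need Arm64 rebuilds; `APP_DIR` is a placeholder path, and the script assumes the standard `file` utility is installed:

```shell
# APP_DIR is a placeholder; point it at the tree your application ships in.
APP_DIR=${APP_DIR:-.}
report=$(find "$APP_DIR" -name '*.so*' -type f 2>/dev/null | while read -r lib; do
  # `file` reports the target architecture baked into each shared library.
  if file "$lib" | grep -q 'x86-64'; then
    echo "needs an Arm64 rebuild: $lib"
  fi
done)
echo "${report:-no x86-64-only native libraries found}"
```

Pure-interpreted dependencies (Python wheels, Java JARs without JNI, and so on) will not show up here; those are checked against your distribution's or package index's Arm64 availability instead.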

Build and test software on Arm64

The availability of Arm64 computing resources on cloud service providers (CSPs) has expanded recently and is still growing. As you can see from the Where to Try and Where to Buy pages on the Ampere Computing website, the availability of Arm64 hardware is not an issue, whether in your data center or on a cloud platform.

Once you have access to an Ampere instance (bare metal or virtual machine), you can start the build-and-test phase of the migration. As noted above, most modern languages now fully support Arm64 as a first-class platform. For many projects, the build process is as simple as recompiling your binaries or deploying your Java code to an Arm64-native JVM.

However, problems in the software development process can sometimes create "technical debt" that the team may have to pay back during the migration. This can take many forms. For example, developers can make assumptions about the availability of specific hardware features, or about implementation-specific behavior that is not defined by the standard. The char data type is a case in point: it may be defined as signed or unsigned depending on the implementation. On Linux on x86 it is signed (ranging from -128 to 127), but on Arm64, with the same compiler, it is unsigned (ranging from 0 to 255). Code that relies on the signedness of char will therefore not work correctly.
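You can check which convention a given host follows without writing any C: on glibc systems, `getconf` exposes the `<limits.h>` values in effect (a quick sketch, assuming `getconf` is available):

```shell
# CHAR_MIN is 0 where plain char is unsigned, -128 where it is signed.
char_min=$(getconf CHAR_MIN)
if [ "$char_min" -eq 0 ]; then
  echo "plain char is unsigned here (typical of Linux on Arm64)"
else
  echo "plain char is signed here (typical of Linux on x86_64)"
fi
```

Portable code sidesteps the question entirely by using `int8_t` or `uint8_t` from `<stdint.h>` wherever signedness matters.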

Overall, however, code that complies with the standards and does not rely on x86-specific hardware features such as SSE builds easily on Ampere processors. Most continuous-integration tools (the tools that manage automated builds and testing across a matrix of supported platforms), such as Jenkins, CircleCI, Travis, and GitHub Actions, support Arm64 build nodes.

Manage application deployment in production

Now let's look at how your infrastructure management changes when you deploy your cloud-native application to production. The first thing to note is that you don't have to move the entire application at once: you can choose the parts that will benefit most from migrating to Arm64 and start there. Most managed Kubernetes services support heterogeneous infrastructure within a single cluster. Annoyingly, different CSPs use different names for the mechanism that mixes different types of compute nodes in a single Kubernetes cluster, but all major CSPs now support the feature. Once you have an Ampere compute pool in your Kubernetes cluster, you can use taints and tolerations to define node affinity for containers, requiring them to run on nodes with arch=arm64.
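Taints and tolerations keep other workloads off the Arm64 pool; to actively pin a workload onto it, the standard kubernetes.io/arch node label (set by the kubelet on every node) can be used in a nodeSelector. A minimal sketch, where the deployment name order-api is a placeholder:

```shell
# Strategic-merge patch that pins a Deployment's pods to Arm64 nodes via the
# well-known kubernetes.io/arch node label.
patch='{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/arch":"arm64"}}}}}'
echo "$patch"
# Against a live cluster this would be applied with:
#   kubectl patch deployment order-api --patch "$patch"
```

The scheduler then places those pods only on nodes whose label matches, while untainted workloads continue to land wherever capacity exists.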

If you have been building your project's containers for the Arm64 architecture, creating a multi-architecture container manifest is straightforward. This is essentially a manifest file containing pointers to multiple container images; the container runtime selects the appropriate image based on the host architecture.
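With Docker, the usual way to produce such a manifest list is `docker buildx`, naming every target platform in a single build. A sketch, where the registry and image name are placeholders and the commented commands assume Docker with the buildx plugin:

```shell
# Target platforms for one multi-architecture image, comma-separated for buildx.
platforms="linux/amd64,linux/arm64"
echo "$platforms" | tr ',' '\n'
# With a Docker daemon available, the manifest list would be built and pushed as:
#   docker buildx create --use
#   docker buildx build --platform "$platforms" \
#     -t registry.example.com/order-api:v1 --push .
```

Each node then automatically pulls the image variant matching its own CPU.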

The main problems people encounter during the deployment phase can again be classified as "technical debt". Deployment and automation scripts may assume platform-specific path names or be hardcoded to depend on x86-only binary artifacts. In addition, the architecture strings reported by different Linux distributions can vary: you may encounter x86, x86-64, x86_64, arm64, and aarch64. Normalizing these platform differences may be something you have never had to do before, but it becomes very important as part of a platform transition.
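A common way to normalize those strings is a small case statement early in your scripts. A sketch, where the canonical labels amd64 and arm64 are an arbitrary choice:

```shell
# Map every architecture alias reported by uname -m onto one canonical label.
arch=$(uname -m)
case "$arch" in
  x86|x86-64|x86_64|amd64) arch=amd64 ;;
  aarch64|arm64)           arch=arm64 ;;
  *) echo "unhandled architecture: $arch" >&2; exit 1 ;;
esac
echo "normalized architecture: $arch"
```

Downstream scripts can then branch on a single, predictable value regardless of distribution.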

The last component of the platform transition is operating the application. Cloud-native applications carry a lot of scaffolding in production to ensure they run properly: log management to centralize events, monitoring so administrators can verify things are working as expected, alerting to flag anomalies, and intrusion-detection tools, application firewalls, and other security tools to protect the application from malicious actors. These require some up-front investment to ensure the proper agents and infrastructure are activated for your application nodes, but since all major monitoring and security platforms now support Arm64, maintaining visibility into your application's inner workings will not usually be a big problem. In fact, many of the largest observability software-as-a-service platforms are increasingly moving their own application platforms to Ampere and other Arm64 platforms to take advantage of the cost savings the platform offers.

Improve your bottom line

The savings from shifting to cloud-native processors can be huge, making the migration investment well worth the effort. With this approach, you can also evaluate and validate the operational savings your organization can expect over time.

Note that one of the biggest obstacles to improving performance is inertia: the tendency of organizations to keep doing what they have always done, even when it is no longer the most efficient or cost-effective approach. That's why we recommend taking a first step that proves the value of cloud-native technology to your organization. This way, you will have real results to share with stakeholders, showing them how cloud-native computing can improve application performance and responsiveness without significant investment or risk.

Cloud-native processors are here. The question is not whether to switch to cloud native, but when. Organizations that embrace the future sooner will start benefiting today, giving them a huge advantage over competitors tied to legacy platforms.

Learn more about developing at cloud speed at the Ampere Developer Center, which offers resources for designing, building, and deploying cloud applications. When you are ready to experience the benefits of cloud-native computing for yourself, ask your CSP about their cloud-native options based on the Ampere Altra family and AmpereOne technology.
