(This article is the fifth part of Ampere Computing’s “Accelerating Cloud Computing” series. You can read all articles on SitePoint.)
The only remaining question in cloud-native application development is where to begin. As the final installment of this series, this article explores how to develop cloud-native applications, where to start the process within your organization, and the situations you are likely to encounter along the way.
As shown in earlier parts of this series, Arm64-based cloud-native platforms are quickly becoming a powerful alternative to x86-based computing. As we demonstrated in Part IV, there is a huge difference between full-core Ampere vCPUs and half-core x86 vCPUs in terms of performance, predictability, and energy efficiency.
The natural way to design, implement, and deploy distributed applications for a cloud-native computing environment is to break the application down into smaller components, or microservices, each responsible for a specific task. Within these microservices, several technical elements usually combine to provide the functionality. For example, an order management system might contain a private data store (perhaps caching orders and customer information in memory), a session manager to handle customers' shopping carts, and an API manager that lets front-end services interact with it. It may also connect to an inventory service to determine merchandise availability, perhaps a delivery module to determine shipping costs and delivery dates, and a payment service to collect payment.
The distributed nature of cloud computing allows applications to scale with demand and allows application components to be maintained independently, in a way that a monolithic application cannot match. If your e-commerce website sees heavy traffic, you can scale the front end independently of the inventory service or payment engine, or add more workers to handle order management. Cloud-native applications are designed to isolate failures so that a fault in one component does not affect the others, in contrast to a monolith, where a single failure can bring down the entire system.
In addition, the cloud-native approach lets software take full advantage of available hardware by creating only the servers needed to handle the current load and shutting down resources during off-peak hours. Modern cloud-native CPUs like Ampere's provide a large number of fast cores connected by fast interconnects, allowing software architects to scale their applications efficiently.
In the second and third parts of this series, we showed that migrating applications to Arm64-based cloud-native platforms is relatively simple. In this article, we describe the steps usually needed to make that migration successful.
The first step in migrating to Ampere's cloud-native Arm64 processors is to select the right application. Some applications that are tightly coupled to other CPU architectures may be harder to migrate, either because their source code depends on a specific instruction set or because of performance or functional constraints tied to that instruction set. However, Ampere processors are a good fit for a wide range of cloud applications.
Once you have selected an application you think is a good candidate for migration, the next step is to determine how much work is needed to update its dependency stack. The dependency stack includes the host or guest operating system, the programming language and runtime, and any application dependencies your service may have. The Arm64 instruction set used in Ampere CPUs has only become prominent in recent years, and many projects have invested heavily in improving their Arm64 performance over that period. As a result, a common theme in this phase is "newer versions are better."
The availability of Arm64 computing resources on cloud service providers (CSPs) has expanded recently and is still growing. As you can see from the Where to Try and Where to Buy pages on the Ampere Computing website, the availability of Arm64 hardware is not an issue, whether in your data center or on a cloud platform.
Once you have access to an Ampere instance (bare metal or a virtual machine), you can start the build and test phase of the migration. As noted above, most modern languages now fully support Arm64 as a first-class platform. For many projects, the build process is as simple as recompiling your binaries or deploying your Java code to an Arm64-native JVM.
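For a compiled language like Go, for example, targeting Arm64 is typically a one-line change. A minimal sketch (the package path and binary names here are hypothetical):

```shell
# Native build on an Ampere (Arm64) instance -- no special flags needed:
go build -o myservice ./cmd/myservice

# Or cross-compile for Arm64 from an x86 build host:
GOOS=linux GOARCH=arm64 go build -o myservice-arm64 ./cmd/myservice

# Verify the target architecture of the resulting binary:
file myservice-arm64
```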
However, shortcuts taken during software development can create "technical debt" that the team has to pay back during migration. This can take many forms. For example, developers may have made assumptions about the availability of certain hardware features, or relied on implementation-specific behavior that is not defined in the standard. The char data type, for instance, may be defined as signed or unsigned depending on the implementation: on Linux on x86 it is signed (ranging from -128 to 127), but on Arm64 with the same compiler it is unsigned (ranging from 0 to 255). As a result, code that depends on the signedness of char will not work correctly.
Overall, however, standards-compliant code that does not rely on x86-specific hardware features such as SSE builds easily on Ampere processors. Most continuous integration tools (the tools that manage automated builds and testing across a matrix of supported platforms), such as Jenkins, CircleCI, Travis, and GitHub Actions, support Arm64 build nodes.
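As one hedged example, a GitHub Actions workflow can build and test on both architectures with a runner matrix. The runner labels below reflect GitHub-hosted runners at the time of writing, and the build command is a placeholder; check your CI provider's documentation for current labels:

```yaml
# Hypothetical workflow: build and test on both x86_64 and Arm64 runners.
name: build
on: [push]
jobs:
  build:
    strategy:
      matrix:
        # ubuntu-24.04 is x86_64; ubuntu-24.04-arm is Arm64
        runner: [ubuntu-24.04, ubuntu-24.04-arm]
    runs-on: ${{ matrix.runner }}
    steps:
      - uses: actions/checkout@v4
      - run: make build test   # replace with your project's build commands
```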
Now let's look at how deploying your cloud-native application to production changes your infrastructure management. The first thing to note is that you don't have to move the entire application at once: you can pick the parts that will benefit most from migrating to Arm64 and start there. Most managed Kubernetes services support heterogeneous infrastructure within a single cluster. Annoyingly, different CSPs use different names for the mechanism that mixes different types of compute nodes in a single Kubernetes cluster, but all major CSPs now support this feature. Once you have an Ampere compute pool in your Kubernetes cluster, you can use "taints" and "tolerations" to define node affinity for your containers, requiring them to run on nodes with arch=arm64.
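As a sketch, a workload can be pinned to the Arm64 node pool with a nodeSelector on the standard `kubernetes.io/arch` node label, plus a toleration if the pool is tainted. The service name, image, and taint key below are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service            # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels: { app: order-service }
  template:
    metadata:
      labels: { app: order-service }
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64   # schedule only onto Arm64 nodes
      tolerations:                  # needed only if the Arm64 pool is tainted
        - key: "arch"
          operator: "Equal"
          value: "arm64"
          effect: "NoSchedule"
      containers:
        - name: order-service
          image: registry.example.com/order-service:latest  # multi-arch image
```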
If you have been building your project's containers for the Arm64 architecture, creating a multi-architecture container is very simple. A multi-architecture image is essentially a manifest list pointing to multiple container images; at run time, the container runtime selects the image that matches the host architecture.
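With Docker's buildx, for instance, the per-architecture images and the manifest list that ties them together can be built and pushed in one step (the registry and image name are placeholders):

```shell
# Build for both architectures and push a manifest list that points to
# one image per architecture; the runtime on each node pulls its match.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag registry.example.com/order-service:latest \
  --push .

# Inspect the manifest list to confirm both architectures are present:
docker buildx imagetools inspect registry.example.com/order-service:latest
```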
The main problems people encounter during the deployment phase can, once again, be classified as technical debt. Deployment and automation scripts may assume platform-specific path names, or be hardcoded to rely on x86-only binary artifacts. In addition, the architecture string returned by a Linux distribution can vary by distribution: you may encounter x86, x86-64, x86_64, arm64, or aarch64. Normalizing these platform differences may be something you have never had to do before, but it becomes important as part of a platform transition.
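A small shell helper is one way to do this normalization in deployment scripts: map whatever `uname -m` (or a CI variable) reports onto the two canonical values used by container manifests. This is a sketch; extend the patterns to whatever strings your environments actually produce:

```shell
# Normalize an architecture string to the values used by container
# image manifests: "amd64" or "arm64".
normalize_arch() {
  case "$1" in
    x86_64|x86-64|amd64|x64) echo "amd64" ;;
    aarch64|arm64)           echo "arm64" ;;
    *) echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

# Example: resolve the current host's architecture once, then use it
# to select the right binary artifact or image tag.
ARCH="$(normalize_arch "$(uname -m)")"
echo "resolved architecture: ${ARCH}"
```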
The last part of the platform transition concerns operating the application. Cloud-native applications run with a great deal of scaffolding in production to ensure they work properly: log management to centralize events, monitoring so administrators can verify that things are working as expected, alerting to flag anomalies, and intrusion detection tools, application firewalls, and other security tools to protect your application from malicious actors. It will take some time to ensure the proper agents and infrastructure are activated for your application nodes, but since all major monitoring and security platforms now support Arm64, gaining visibility into the inner workings of your application should not usually pose a big problem. In fact, many of the largest observability software-as-a-service platforms are increasingly migrating their own application platforms to Ampere and other Arm64 platforms to take advantage of the cost savings the platform offers.
The savings from a shift to cloud-native processors can be substantial, making the migration investment well worth the effort. This approach also lets you evaluate and validate the operational savings your organization can expect over time.
Note that one of the biggest obstacles to improved performance is inertia: the tendency of organizations to keep doing what they have always done, even when it is no longer the most efficient or cost-effective approach. That is why we recommend taking a first step that proves the value of cloud-native technology to your organization. That way, you will have real results to share with stakeholders, showing them how cloud-native computing can improve application performance and responsiveness without significant investment or risk.
Cloud-native processors are here. The question is not whether to switch to cloud native, but when. Organizations that embrace the future sooner will start benefiting today, gaining a huge advantage over competitors tied to legacy platforms.
Learn more about developing at cloud speed at the Ampere Developer Center, which contains resources for designing, building, and deploying cloud applications. When you are ready to experience the benefits of cloud-native computing for yourself, ask your CSP about their cloud-native options based on the Ampere Altra family and AmpereOne technology.