News from March 12: according to Google's official blog, machine learning (ML) development and deployment today suffer from fragmented, siloed infrastructure that can vary by framework, hardware, and use case. This fragmentation limits developer velocity and creates barriers to model portability, efficiency, and productionization.
To address this, 12 technology giants, including Alibaba, Amazon AWS, AMD, Apple, Arm, Cerebras, Google, Graphcore, Hugging Face, Intel, Meta, and NVIDIA, announced the joint launch of the OpenXLA project (comprising the XLA, StableHLO, and IREE repositories), which lets developers compile and optimize models from all leading ML frameworks for efficient training and serving across a variety of hardware.
According to reports, the OpenXLA project provides a state-of-the-art ML compiler that can scale across complex ML infrastructure. By bridging the disparate hardware devices and the multiple frameworks in use today (e.g., TensorFlow, PyTorch), this universal compiler aims to unlock the full potential of AI and accelerate its development and delivery.
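The portability layer at the heart of this bridging is StableHLO, a framework-neutral operation set: a model traced from TensorFlow or PyTorch can be lowered to StableHLO and then handed to XLA or IREE to compile for a given backend. A minimal sketch of what the textual IR looks like (the function name and tensor shapes here are illustrative, not taken from the announcement):

```mlir
// Hypothetical StableHLO module: an element-wise add over 2-element
// float tensors, as a framework would emit it after tracing.
module {
  func.func @add(%arg0: tensor<2xf32>, %arg1: tensor<2xf32>) -> tensor<2xf32> {
    %0 = stablehlo.add %arg0, %arg1 : tensor<2xf32>
    return %0 : tensor<2xf32>
  }
}
```

Because every supported framework can emit this same IR, each hardware vendor only needs to target StableHLO rather than integrating with every framework separately.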
Developers using OpenXLA will see significant improvements in training time, throughput, and serving latency, and ultimately in time-to-market and compute cost, Google said.
The OpenXLA project has been published on GitHub; IT Home has attached the access link.