
Apple builds an open source framework MLX for its own chips, implements Llama 7B and runs it on M2 Ultra

王林
Release: 2023-12-14 23:49:01

In November 2020, Apple launched the M1 chip, which was astonishingly fast and powerful. The M2 followed in 2022, and in October this year the M3 chip officially debuted.

With each chip release, Apple has placed great emphasis on AI model training and deployment capabilities.

Apple's ML Compute makes it possible to train TensorFlow models on the Mac, and PyTorch supports GPU-accelerated machine learning model training on M1 Macs using Apple's Metal Performance Shaders (MPS) as the backend. Both let Mac users train neural networks locally.

Now Apple has announced MLX, an open source array framework designed specifically for machine learning that runs on Apple silicon.


MLX is a framework designed for machine learning researchers to efficiently train and deploy AI models. Its design is simple and easy to understand: researchers can readily extend and improve MLX to quickly explore and test new ideas. MLX's design is inspired by frameworks such as NumPy, PyTorch, Jax and ArrayFire.


Project address: https://github.com/ml-explore/mlx

Awni Hannun, a research scientist on Apple's Machine Learning Research (MLR) team and one of the MLX project's contributors, demonstrated in a video Llama 7B implemented with MLX and running on an M2 Ultra.


MLX quickly attracted the attention of machine learning researchers. Tianqi Chen, author of TVM, MXNet and XGBoost, assistant professor at CMU, and CTO of OctoML, retweeted: "Apple chips have a new deep learning framework."


Some, however, feel that with MLX Apple has once again "repeated the same mistakes" by building yet another framework of its own.


MLX features and examples

From the project repository, we can see that MLX has the following main features:

Familiar API. MLX has a Python API that closely follows NumPy, as well as a full-featured C++ API that closely mirrors the Python API. MLX also ships higher-level packages (such as mlx.nn and mlx.optimizers) whose APIs closely follow PyTorch, simplifying the construction of more complex models.
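Because the Python API tracks NumPy so closely, most array code carries over nearly verbatim. Below is a minimal sketch of that shared API shape, run here with NumPy as a stand-in; on an Apple silicon Mac the import would instead be `import mlx.core as mx` (the namespace documented in the repository), with the rest of the snippet unchanged.

```python
import numpy as np  # stand-in here; MLX's NumPy-like namespace is mlx.core

# Array creation and arithmetic in the shared NumPy/MLX style
a = np.array([1.0, 2.0, 3.0])
b = np.ones(3)

c = a + b          # elementwise addition
d = (a * b).sum()  # elementwise multiply, then reduce to a scalar

print(c)  # [2. 3. 4.]
print(d)  # 6.0
```

This is a sketch of the API's shape, not MLX itself: installing MLX (`pip install mlx`) requires Apple silicon, which is exactly the portability trade-off critics of the framework point to.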

Composable function transformations. MLX supports composable function transformations for automatic differentiation, automatic vectorization, and computation graph optimization.
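MLX exposes these transformations as functions such as `mx.grad` and `mx.vmap` that can be nested freely. As a framework-free illustration of what "composable function transformation" means, here is a toy `grad` built on central differences and a list-based `vmap` (the names echo MLX's, but this is not how MLX works internally; MLX uses automatic differentiation rather than numerical approximation):

```python
def grad(f, h=1e-4):
    """Return a new function approximating df/dx by central differences."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

def vmap(f):
    """Return a new function mapping f over a list of inputs."""
    return lambda xs: [f(x) for x in xs]

f = lambda x: x ** 3        # f(x) = x^3
df = grad(f)                # f'(x) = 3x^2
ddf = grad(grad(f))         # f''(x) = 6x: transforms compose by nesting

print(round(df(2.0), 3))    # ≈ 12.0
print(round(ddf(2.0), 2))   # ≈ 12.0
print(vmap(df)([0.0, 1.0, 2.0]))  # ≈ [0.0, 3.0, 12.0]
```

The key property the toy shares with MLX is that each transformation takes a function and returns a function, so transformations stack in any order.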

Lazy computation. Computation in MLX is lazy; arrays are only materialized when their values are actually needed.
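In other words, operations record a computation graph and the work happens only when a result is demanded. A minimal plain-Python sketch of this idea, with a deferred node that runs its computation only on `eval()` (illustrative class and method names, not MLX's internals):

```python
class Lazy:
    """A deferred computation: records the op and its inputs, runs on demand."""
    def __init__(self, fn, *deps):
        self.fn = fn
        self.deps = deps
        self.evaluated = False
        self._value = None

    def eval(self):
        if not self.evaluated:  # compute once, on first demand
            args = [d.eval() if isinstance(d, Lazy) else d for d in self.deps]
            self._value = self.fn(*args)
            self.evaluated = True
        return self._value

# Building the graph performs no computation...
x = Lazy(lambda: [1, 2, 3])
y = Lazy(lambda v: [2 * e for e in v], x)
print(y.evaluated)  # False

# ...work happens only when the value is actually needed
print(y.eval())     # [2, 4, 6]
print(y.evaluated)  # True
```

Deferring work this way is what lets a framework skip arrays that are never used and optimize the graph before executing it.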

Dynamic graph construction. Computation graphs in MLX are built dynamically: changing the shapes of function arguments does not trigger slow compilations, and debugging is simple and intuitive.

Multiple devices. Operations can run on any supported device, such as the CPU or GPU.

Unified memory. A notable difference between MLX and other frameworks is its unified memory model: arrays in MLX live in shared memory, so operations on them can run on any supported device type without moving data.

In addition, the project provides a variety of examples of using MLX, such as an MNIST example, which serves as a good introduction to the framework.


Image source: https://github.com/ml-explore/mlx-examples/tree/main/mnist

In addition to the above, MLX also provides other, more practical examples, such as:

  • Transformer language model training;
  • large-scale text generation with LLaMA and LoRA fine-tuning;
  • image generation with Stable Diffusion;
  • speech recognition with OpenAI's Whisper.

For more detailed documentation, please refer to: https://ml-explore.github.io/mlx/build/html/install.html



Source: 51cto.com