Running Llama 3.2 on Android: A Step-by-Step Guide Using Ollama

Llama 3.2 was recently introduced at Meta’s Developer Conference, showcasing impressive multimodal capabilities and a version optimized for mobile devices using Qualcomm and MediaTek hardware. This breakthrough allows developers to run powerful AI models like Llama 3.2 on mobile devices, paving the way for more efficient, private, and responsive AI applications.

Meta released four variants of Llama 3.2:

  • Multimodal models with 11 billion (11B) and 90 billion (90B) parameters.
  • Text-only models with 1 billion (1B) and 3 billion (3B) parameters.

The larger models, especially the 11B and 90B variants, excel in tasks like image understanding and chart reasoning, often outperforming other models like Claude 3 Haiku and even competing with GPT-4o-mini in certain cases. On the other hand, the lightweight 1B and 3B models are designed for text generation and multilingual capabilities, making them ideal for on-device applications where privacy and efficiency are key.

In this guide, we'll show you how to run Llama 3.2 on an Android device using Termux and Ollama. Termux provides a Linux environment on Android, and Ollama manages downloading and running large models locally.

Why Run Llama 3.2 Locally?

Running AI models locally offers two major benefits:

  1. Instantaneous processing since everything is handled on the device.
  2. Enhanced privacy as there is no need to send data to the cloud for processing.

There aren’t yet many products that let mobile devices run models like Llama 3.2 smoothly, but we can still experiment by setting up a Linux environment on Android.


Steps to Run Llama 3.2 on Android

1. Install Termux on Android

Termux is a terminal emulator that allows Android devices to run a Linux environment without needing root access. It’s available for free and can be downloaded from the Termux GitHub page.

For this guide, download the termux-app_v0.119.0-beta.1+apt-android-7-github-debug_arm64-v8a.apk and install it on your Android device.
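
Most modern Android phones use 64-bit ARM CPUs, which is what the arm64-v8a build targets. If you are unsure about your device, you can check from inside Termux once it is installed; aarch64 indicates a 64-bit ARM device:

   # Print the CPU architecture; expect "aarch64" on arm64-v8a hardware
   uname -m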

2. Set Up Termux

After launching Termux, follow these steps to set up the environment:

  1. Grant Storage Access:
   termux-setup-storage

This command lets Termux access your Android device’s storage, enabling easier file management (you can verify the result with the checks after this list).

  2. Update Packages:
   pkg upgrade

Enter Y when prompted to update Termux and all installed packages.

  3. Install Essential Tools:
   pkg install git cmake golang

These packages include Git for version control, CMake for building software, and Go, the programming language in which Ollama is written.
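
Before moving on, it is worth sanity-checking the setup. The commands below are a minimal verification pass, assuming a default Termux installation; the exact symlinks and version numbers will vary:

   # The storage grant creates ~/storage with symlinks to shared folders
   ls ~/storage

   # Confirm each build tool is installed and on the PATH
   git --version
   cmake --version
   go version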

3. Install and Compile Ollama

Ollama is a platform for running large models locally. Here’s how to install and set it up:

  1. Clone Ollama's GitHub Repository:
   git clone --depth 1 https://github.com/ollama/ollama.git
  2. Navigate to the Ollama Directory:
   cd ollama
  3. Generate Go Code (this step builds Ollama's native llama.cpp components, so it can take a while):
   go generate ./...
  4. Build Ollama:
   go build .
  5. Start Ollama Server:
   ./ollama serve &

Now the Ollama server will run in the background, allowing you to interact with the models.
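
To confirm the server is actually listening, you can query its HTTP API, which binds to port 11434 by default. This check assumes curl is installed (pkg install curl):

   # List the models the local server knows about; an empty list is normal at this point
   curl http://127.0.0.1:11434/api/tags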

4. Running Llama 3.2 Models

To run the Llama 3.2 model on your Android device, follow these steps:

  1. Choose a Model:

    • Models like llama3.2:3b (3 billion parameters) are available for testing. These models are quantized for efficiency. You can find a list of available models on Ollama’s website.
  2. Download and Run the Llama 3.2 Model:

   ./ollama run llama3.2:3b --verbose

The --verbose flag is optional and provides detailed logs. After the download is complete, you can start interacting with the model.
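
Beyond the interactive chat, ollama run also accepts a one-shot prompt on the command line, which is convenient for quick tests or shell scripting. The prompt text here is just an illustration:

   # Generate a single response and exit instead of starting an interactive session
   ./ollama run llama3.2:3b "Summarize what quantization does to a model in one sentence."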

5. Managing Performance

In testing on a Samsung S21 Ultra, the 1B model ran smoothly and the 3B model was manageable, though you may notice lag on older hardware. If responses are too slow, switching to the smaller 1B model can significantly improve responsiveness.
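
Switching models only requires changing the tag; Ollama downloads the 1B variant on first use:

   # Pull and run the smaller 1-billion-parameter model
   ./ollama run llama3.2:1b --verbose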


Optional Cleanup

After using Ollama, you may want to clean up the system:

  1. Remove Unnecessary Files:
   # Go's module cache is written read-only; restore write permission first
   chmod -R 700 ~/go
   # Then remove the Go workspace to reclaim space
   rm -r ~/go
  2. Move the Ollama Binary to a Global Path:
   cp ollama/ollama /data/data/com.termux/files/usr/bin/

Now, you can run ollama directly from the terminal.
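
For example, after copying the binary you can start the server and chat with a model from any directory, without the ./ prefix:

   # Start the server in the background, then open a chat session
   ollama serve &
   ollama run llama3.2:3b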


Conclusion

Llama 3.2 represents a major leap forward in AI technology, bringing powerful, multimodal models to mobile devices. By running these models locally using Termux and Ollama, developers can explore the potential of privacy-first, on-device AI applications that don’t rely on cloud infrastructure. With models like Llama 3.2, the future of mobile AI looks bright, allowing faster, more secure AI solutions across various industries.

Source: dev.to