How to Run LLMs Locally in 1 Minute?
Large language models (LLMs) such as GPT and Llama have changed the way we handle language tasks, from building smart chatbots to generating complex code snippets. Cloud platforms like Hugging Face make these models easy to use, but in many cases running an LLM locally on your own computer is the smarter choice. Why? It offers greater privacy, lets you customize the model to your specific needs, and can significantly reduce costs. Running an LLM locally gives you full control, so you can use its power on your own terms.
Let's see how to run an LLM on your system with Ollama and Hugging Face in just a few simple steps!
The following video explains the process step by step:
How to run LLM locally in one minute [beginner friendly] - using Ollama and Hugging Face (video by dylan, @dylanebert, January 6, 2025).
Steps to run an LLM locally
Step 1: Download Ollama
First, search for "Ollama" in your browser, then download and install it on your system.
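On macOS and Windows, Ollama ships as a regular installer from ollama.com. On Linux, the official one-line install script can be used instead; the minimal sketch below assumes a Linux machine with a standard shell:

```bash
# Install Ollama on Linux using the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Verify the installation
ollama --version
```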
Step 2: Find the best open source LLM
Next, search for the Hugging Face LLM Leaderboard to find a ranked list of top open-source language models.
Step 3: Filter models based on your device
Once you see the list, apply filters to find the best model for your setup. For example:
- Filter for consumer-grade hardware (a home PC or laptop rather than data-center GPUs).
- Select only official providers to avoid unofficial or unverified models.
- If your laptop has a low-end GPU, choose a model designed for edge devices.
Click on a top-ranked model, such as Qwen/Qwen2.5-32B. In the upper right corner of the model page, click "Use this model". However, Ollama does not appear as an option here.
This is because Ollama uses a special format called GGUF, which stores a smaller, faster, quantized version of the model.
(Note: Quantization slightly reduces quality, but makes the model much more practical for local use.)
Step 4: Get the model in GGUF format
- Go to the "Quantizations" section on the model page - there are roughly 80 quantized variants available here. Sort them by most downloads.
- Look for models with "GGUF" in their names from providers such as bartowski; these are a good choice.
- Select this model and click "Use this model with Ollama".
- For the quantization setting, select a file whose size is 1-2 GB smaller than your GPU's VRAM, or go with the recommended option, such as Q5_K_M (a quick way to check your available VRAM is shown below).
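If you are not sure how much VRAM your GPU has, you can check it from the terminal. This sketch assumes an NVIDIA GPU with the standard driver tools installed; on other hardware the command will differ:

```bash
# Report the GPU name plus total and currently used memory (NVIDIA GPUs only)
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv
```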
Step 5: Download and start using the model
Copy the command provided for the model of your choice and paste it into your terminal. Press Enter and wait for the download to complete.
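The copied command generally follows Ollama's hf.co syntax. A minimal sketch, assuming the bartowski Qwen2.5-32B-Instruct GGUF repository and the Q5_K_M quantization mentioned above (substitute the repository and tag you actually selected):

```bash
# Pull a GGUF model hosted on Hugging Face and start an interactive chat
ollama run hf.co/bartowski/Qwen2.5-32B-Instruct-GGUF:Q5_K_M
```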
After the download is complete, you can start chatting with the model like you would with any other LLM. Simple and fun!
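If you'd rather call the model from a script than chat in the terminal, Ollama also exposes a local HTTP API once it is running. A minimal sketch, again assuming the model name pulled above:

```bash
# Send a single prompt to the locally running model via Ollama's REST API
curl http://localhost:11434/api/generate -d '{
  "model": "hf.co/bartowski/Qwen2.5-32B-Instruct-GGUF:Q5_K_M",
  "prompt": "Explain quantization in one sentence.",
  "stream": false
}'
```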
That's it! You are now running a powerful LLM locally on your device. Let me know in the comments below whether these steps worked for you.