How to Run DeepSeek Models Locally in 5 Minutes?

Christopher Nolan
Release: 2025-03-07 09:59:09

DeepSeek has taken the AI community by storm, with 68 models available on Hugging Face as of today. This family of open-source models can be accessed through Hugging Face or Ollama, while DeepSeek-R1 and DeepSeek-V3 can be directly used for inference via DeepSeek Chat. In this blog, we’ll explore DeepSeek’s model lineup and guide you through running these models using Google Colab and Ollama.

Table of contents

  • Overview of DeepSeek Models
  • Running DeepSeek R1 on Ollama
    • Step 1: Install Ollama
    • Step 2: Pull the DeepSeek R1 Model
    • Step 3: Run the Model Locally
  • Running DeepSeek-Janus-Pro-1B on Google Colab
    • Step 1: Clone the DeepSeek-Janus Repository
    • Step 2: Install Dependencies
    • Step 3: Load the Model and Move It to GPU
    • Step 4: Pass an Image for Processing
  • Conclusion 

Overview of DeepSeek Models

DeepSeek offers a diverse range of models, each optimized for different tasks. Below is a breakdown of which model suits your needs best:

  • For Developers & Programmers: The DeepSeek-Coder and DeepSeek-Coder-V2 models are designed for coding tasks such as writing and debugging code.
  • For General Users: The DeepSeek-V3 model is a versatile option capable of handling a wide range of queries, from casual conversations to complex content generation.
  • For Researchers & Advanced Users: The DeepSeek-R1 model specializes in advanced reasoning and logical analysis, making it ideal for problem-solving and research applications.
  • For Vision Tasks: The DeepSeek-Janus family and DeepSeek-VL models are tailored for multimodal tasks, including image generation and processing.

Also Read: Building AI Application with DeepSeek-V3

Running DeepSeek R1 on Ollama

Step 1: Install Ollama

To run DeepSeek models on your local machine, you need to install Ollama:

  • Download Ollama: Download the installer for your platform from ollama.com.
  • For Linux users: Run the following command in your terminal:

curl -fsSL https://ollama.com/install.sh | sh

Step 2: Pull the DeepSeek R1 Model

Once Ollama is installed, open your Command Line Interface (CLI) and pull the model:

ollama pull deepseek-r1:1.5b

You can explore other DeepSeek models available on Ollama here: Ollama Model Search.

This step may take some time, so wait for the download to complete.

ollama pull deepseek-r1:1.5b

pulling manifest 
pulling aabd4debf0c8... 100% ▕████████████████▏ 1.1 GB                         
pulling 369ca498f347... 100% ▕████████████████▏  387 B                         
pulling 6e4c38e1172f... 100% ▕████████████████▏ 1.1 KB                         
pulling f4d24e9138dd... 100% ▕████████████████▏  148 B                         
pulling a85fe2a2e58e... 100% ▕████████████████▏  487 B                         
verifying sha256 digest 
writing manifest 
success 

Step 3: Run the Model Locally

Once the model is downloaded, you can run it using the command:

ollama run deepseek-r1:1.5b

The model is now available to use on the local machine and is answering my questions without any hiccups.
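Beyond the interactive CLI, Ollama also exposes a local REST API on port 11434. The snippet below is a minimal sketch of how a request to its /api/generate endpoint can be built; actually sending it of course requires the Ollama server to be running with the model pulled:

```python
import json

# JSON payload for Ollama's /api/generate endpoint (non-streaming).
payload = {
    "model": "deepseek-r1:1.5b",
    "prompt": "Summarize what a mixture-of-experts model is.",
    "stream": False,
}
body = json.dumps(payload)
print(body)

# To send it from a terminal against a running Ollama server:
#   curl http://localhost:11434/api/generate -d "$body"
```

Setting "stream" to False returns the full response in a single JSON object instead of a stream of partial tokens.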

Running DeepSeek-Janus-Pro-1B on Google Colab

In this section, we’ll try out DeepSeek-Janus-Pro-1B using Google Colab. Before starting, make sure to set the runtime to T4 GPU for optimal performance.
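Before loading anything, it is worth confirming that the notebook actually received a GPU runtime. One stdlib-only way to check (an illustrative helper, not part of the original walkthrough) is to look for the nvidia-smi binary that Colab GPU runtimes provide:

```python
import shutil

# Colab GPU runtimes ship the NVIDIA driver tools, so nvidia-smi is on PATH.
has_gpu = shutil.which("nvidia-smi") is not None
print("GPU runtime detected" if has_gpu
      else "No GPU found - set Runtime > Change runtime type > T4 GPU")
```

If no GPU is found, change the runtime type before proceeding; the model-loading step below will fail without CUDA.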

Step 1: Clone the DeepSeek-Janus Repository

Run the following command in a Colab notebook:

!git clone https://github.com/deepseek-ai/Janus.git

Explore more DeepSeek models on GitHub: DeepSeek AI GitHub Repository

Step 2: Install Dependencies

Navigate to the cloned directory and install the required packages:

%cd Janus
!pip install -e .
!pip install flash-attn

Step 3: Load the Model and Move It to GPU

Now, we'll import the necessary libraries and load the model onto CUDA (GPU). The snippet below follows the image-understanding example in the Janus repository; exact class names may vary between releases:

import torch
from transformers import AutoModelForCausalLM
from janus.models import MultiModalityCausalLM, VLChatProcessor

model_path = "deepseek-ai/Janus-Pro-1B"
vl_chat_processor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

vl_gpt = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()

Step 4: Pass an Image for Processing

Now, let’s pass an image to the model and generate a response.

Input Image: a screenshot of a blog's "Latest Articles" section featuring a post titled "How to Access DeepSeek Janus Pro 7B?"

Initializing the Prompt and System Role

The conversation format below follows the Janus repository's example; image_path is assumed to point to the uploaded screenshot:

image_path = "image.png"  # path to the uploaded input image
question = "What's in the image?"

conversation = [
    {
        "role": "<|User|>",
        "content": f"<image_placeholder>\n{question}",
        "images": [image_path],
    },
    {"role": "<|Assistant|>", "content": ""},
]

Processing the Input

Again following the repository's example, we preprocess the conversation and image, run generation, and decode the answer:

from janus.utils.io import load_pil_images

pil_images = load_pil_images(conversation)
prepare_inputs = vl_chat_processor(
    conversations=conversation, images=pil_images, force_batchify=True
).to(vl_gpt.device)

inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
outputs = vl_gpt.language_model.generate(
    inputs_embeds=inputs_embeds,
    attention_mask=prepare_inputs.attention_mask,
    pad_token_id=tokenizer.eos_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    max_new_tokens=512,
    do_sample=False,
    use_cache=True,
)

answer = tokenizer.decode(outputs[0].cpu().tolist(), skip_special_tokens=True)
print(answer)

Output:

<|User|>:
What’s in the image?

<|Assistant|>: The image features a section titled “Latest Articles” with a focus on a blog post. The blog post discusses “How to Access DeepSeek Janus Pro 7B?” and highlights its multimodal AI capabilities in reasoning, text-to-image, and instruction-following. The image also includes the DeepSeek logo (a dolphin) and a hexagonal pattern in the background.

We can see that the model can read the text in the image and also spot the DeepSeek logo. Based on first impressions, it is performing well.

Also Read: How to Access DeepSeek Janus Pro 7B?

Conclusion

DeepSeek is rapidly emerging as a powerful force in AI, offering a wide range of models for developers, researchers, and general users. As it competes with industry giants like OpenAI and Gemini, its cost-effective and high-performance models are likely to gain widespread adoption.

The applications of DeepSeek models are limitless, ranging from coding assistance to advanced reasoning and multimodal capabilities. With seamless local execution via Ollama and cloud-based inference options, DeepSeek is poised to become a game-changer in AI research and development.

If you have any questions or face issues, feel free to ask in the comments section!
