
How to Use DeepSeek Janus-Pro Locally

Christopher Nolan
Release: 2025-03-01 10:00:21

DeepSeek, a Chinese AI innovator, has significantly impacted the global AI landscape, causing a $1 trillion decline in US stock market valuations and unsettling tech giants like Nvidia and OpenAI. Its rapid rise to prominence is due to its leading-edge text generation, reasoning, vision, and image generation models. A recent highlight is the launch of its cutting-edge Janus series of multimodal models. This tutorial details setting up a local Docker container to run the Janus model and explore its capabilities.

(Image by Author)

This guide covers setting up a Janus project, building a Docker container for local execution, and testing its image and text processing capabilities. Further exploration of DeepSeek's disruptive models is available via these resources:

  • DeepSeek-V3: A Guide With Demo Project
  • DeepSeek-R1: Features, o1 Comparison, Distilled Models & More

Introducing the DeepSeek Janus Series

The DeepSeek Janus Series represents a new generation of multimodal models, designed to seamlessly integrate visual comprehension and generation using advanced frameworks. The series comprises Janus, JanusFlow, and the high-performance Janus-Pro, each iteration improving efficiency, performance, and multimodal functionality.

1. Janus: A Unified Approach

Janus employs a novel autoregressive framework, separating visual encoding into distinct pathways for understanding and generation while leveraging a unified transformer architecture. This design resolves inherent conflicts between these functions, boosting flexibility and efficiency. Janus's performance rivals or surpasses specialized models, making it a prime candidate for future multimodal systems.

2. JanusFlow: Rectified Flow Integration

JanusFlow integrates autoregressive language modeling with rectified flow, a leading generative modeling technique. Its streamlined design simplifies training within large language model frameworks, eliminating complex modifications. Benchmark results show JanusFlow outperforming both specialized and unified approaches, advancing the state-of-the-art in vision-language modeling.

3. Janus-Pro: Optimized Performance

Janus-Pro builds upon its predecessors by incorporating optimized training methods, expanded datasets, and larger model sizes. These enhancements significantly improve multimodal understanding, text-to-image instruction following, and the stability of text-to-image generation.

(Image source: deepseek-ai/Janus)

For a deeper dive into the Janus series, access methods, and comparisons with OpenAI's DALL-E 3, see DeepSeek's Janus-Pro: Features, DALL-E 3 Comparison & More.

Setting up Your Janus Project

Janus is a relatively new model, so there are no readily available quantized versions or local applications for easy desktop/laptop use. Its GitHub repository does include a Gradio web application demo, but that demo frequently runs into package conflicts. This tutorial works around the problem by modifying the demo code, building a custom Docker image, and running it locally using Docker Desktop.

1. Docker Desktop Installation

Begin by downloading and installing the latest Docker Desktop version from the official Docker website.

Windows Users: You will also need the Windows Subsystem for Linux (WSL). Install it via your terminal with:

<code>wsl --install</code>
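Once both are installed, a quick check from a new terminal confirms Docker and WSL are available:

<code># Both commands should print version/status information
docker --version
wsl --status</code>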

2. Cloning the Janus Repository

Clone the Janus repository and navigate to the project directory:

<code>git clone https://github.com/deepseek-ai/Janus.git
cd Janus</code>

3. Modifying the Demo Code

In the demo folder, open app_januspro.py. Make these changes:

  1. Model Name Change: Replace deepseek-ai/Janus-Pro-7B with deepseek-ai/Janus-Pro-1B. This uses the smaller (4.1 GB) model, which is better suited for local use (a sketch of this change appears at the end of this step).


  2. Update the demo.queue Call: Modify the last line to:
<code>demo.queue(concurrency_count=1, max_size=10).launch(
    server_name="0.0.0.0", server_port=7860
)</code>


This binds the app to 0.0.0.0 on port 7860, so it is reachable from outside the Docker container.
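For reference, the first change (the model name) amounts to editing a single line near the top of demo/app_januspro.py. A minimal sketch, assuming the demo stores the Hugging Face model ID in a model_path variable (the exact variable name may differ between repository versions, so check the actual file):

<code># demo/app_januspro.py: sketch of the model-name edit (variable name assumed)
model_path = "deepseek-ai/Janus-Pro-1B"  # previously "deepseek-ai/Janus-Pro-7B"</code>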

4. Creating the Docker Image

Create a Dockerfile in the project's root directory with this content:

<code># Use the PyTorch base image
FROM pytorch/pytorch:latest

# Set the working directory inside the container
WORKDIR /app

# Copy the current directory into the container
COPY . /app

# Install necessary Python packages
RUN pip install -e .[gradio]

# Set the entrypoint for the container to launch your Gradio app
CMD ["python", "demo/app_januspro.py"]</code>

This Dockerfile will:

  • Use a PyTorch base image.
  • Set the container's working directory.
  • Copy project files to the container.
  • Install dependencies.
  • Launch the Gradio application.

Building and Running the Docker Image

After creating the Dockerfile, build and run the Docker image. Consider taking an Introduction to Docker course for foundational knowledge.

Build the image using:

<code>docker build -t janus .</code>

(This may take 10-15 minutes depending on your internet connection.)


Start the container with GPU support, port mapping, and persistent storage:

<code>docker run -it -p 7860:7860 -d -v huggingface:/root/.cache/huggingface -w /app --gpus all --name janus janus:latest</code>
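The --gpus all flag requires working GPU support in Docker (the NVIDIA Container Toolkit on Linux, or GPU support enabled in Docker Desktop with WSL 2 on Windows). Before the model starts loading, you can verify that containers can see the GPU with a quick check (a sketch using the same base image as the Dockerfile):

<code>docker run --rm --gpus all pytorch/pytorch:latest python -c "import torch; print(torch.cuda.is_available())"</code>

If this prints True, GPU passthrough works; False can mean either missing passthrough or a CPU-only base image.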

Monitor progress in the Docker Desktop application's "Containers" and "Logs" tabs. The model download from Hugging Face Hub will be visible in the logs.
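If you prefer the terminal to the Docker Desktop UI, the same logs can be followed with:

<code>docker logs -f janus</code>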


Access the application at: http://localhost:7860/. For troubleshooting, refer to the updated Janus project at kingabzpro/Janus: Janus-Series.

Testing the Janus Pro Model

The web app provides a user-friendly interface. This section demonstrates Janus Pro's multimodal understanding and text-to-image generation.
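If you would rather drive the demo from a script than the browser, the running app can also be queried with the gradio_client package (a sketch; the endpoint names and argument order depend on how app_januspro.py registers its functions, so inspect them first):

<code># pip install gradio_client
from gradio_client import Client

# Connect to the locally running Janus demo
client = Client("http://localhost:7860/")

# List the endpoints and signatures the demo actually exposes
print(client.view_api())</code>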

Multimodal Understanding Tests

To test multimodal understanding, upload an image and request an explanation. Even with the smaller 1B model, the results are highly accurate.


Similarly, testing with an infographic demonstrates accurate summarization of textual content within the image.


Text-to-Image Generation Tests

The "Text-to-Image Generation" section allows for testing with custom prompts. The model generates five variations, which may take several minutes.


The generated images are comparable in quality and detail to Stable Diffusion XL. A more complex prompt is also tested below, demonstrating the model's ability to handle intricate descriptions.


Prompt Example: (Detailed description of an eye with ornate surroundings)


Conclusion

For comprehensive testing, DeepSeek's Hugging Face Spaces deployment (Chat With Janus-Pro-7B) provides access to the full model capabilities. The Janus Pro model's accuracy, even with smaller variants, is noteworthy.

This tutorial detailed Janus Pro's multimodal capabilities and provided instructions for setting up a local, efficient solution for private use. Further learning is available via our guide on Fine-Tuning DeepSeek R1 (Reasoning Model).
