Table of Contents

  • System Design Principles
    • Expert Parallelism (EP)
    • Addressing Challenges of EP
  • Prefilling and Decoding Phases
    • Communication-Computation Overlapping
  • Diagram of DeepSeek’s Online Inference System
  • Performance Statistics
    • Cost and Revenue Analysis
    • Graph Overview
  • Conclusion

DeepSeek #OpenSourceWeek Day 6: Inference System Overview

Mar 22, 2025, 10:26 AM

As we reach Day 6 of #OpenSourceWeek, DeepSeek presented an in-depth overview of the DeepSeek-V3/R1 inference system. This article will dig into the system’s design principles, optimization strategies, and performance statistics, highlighting the significant advancements made in throughput and latency optimization.

System Design Principles

The primary objectives of the DeepSeek-V3/R1 inference system are higher throughput and lower latency. To meet these goals, DeepSeek implemented a sophisticated architecture that leverages cross-node Expert Parallelism (EP). This approach not only enhances the efficiency of GPU matrix computations but also optimizes overall system performance.

Expert Parallelism (EP)

  • Batch Size Scaling: EP allows for significant scaling of the batch size, which is crucial for maximizing GPU utilization and throughput.
  • Memory Access Reduction: By distributing experts across multiple GPUs, each GPU processes only a small subset of experts, which reduces memory access demands and consequently lowers latency.

However, the implementation of EP introduces complexities, particularly in terms of cross-node communication and the need for effective load balancing across different Data Parallelism (DP) instances.
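To make the memory-access argument concrete, here is a minimal sketch of how expert sharding reduces each GPU's footprint. The contiguous sharding scheme and the routing example are illustrative assumptions, not DeepSeek's actual implementation; the 256-expert count matches DeepSeek-V3's routed-expert configuration.

```python
# Sketch of expert-parallel sharding (illustrative, not DeepSeek's code).
# With 256 routed experts spread over 32 GPUs (EP32), each GPU hosts only
# 8 experts, shrinking its expert-weight footprint and memory traffic 32x.

NUM_EXPERTS = 256   # DeepSeek-V3's routed-expert count
EP_DEGREE = 32      # prefill-phase expert parallelism from the article

experts_per_gpu = NUM_EXPERTS // EP_DEGREE  # 8 experts per GPU

def owner_gpu(expert_id: int) -> int:
    """Map an expert to the GPU hosting it (assumed contiguous sharding)."""
    return expert_id // experts_per_gpu

# A token routed to experts 3 and 200 is dispatched (all-to-all) to the
# GPUs that own them, computed there, then gathered back.
print(owner_gpu(3), owner_gpu(200))  # -> 0 25
```

This dispatch/combine traffic is exactly the cross-node communication the next section addresses.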

Addressing Challenges of EP

To tackle these challenges, they focused on three key strategies:

  • Scaling Batch Size: Ensuring a sufficiently large overall batch size maintains high throughput and low latency, even with the model’s inherent sparsity.
  • Hiding Communication Latency: They employ a dual-batch overlap strategy during the prefill and decode phases, allowing them to execute microbatches alternately and hide communication costs behind computation.
  • Load Balancing: They strive to balance computational and communication loads across all GPUs to prevent any single GPU from becoming a bottleneck.
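The load-balancing goal can be illustrated with a simple greedy scheduler. This is a hypothetical sketch, not DeepSeek's actual scheduler: each incoming request goes to the Data Parallelism (DP) instance with the fewest pending tokens, so no single GPU group becomes a bottleneck.

```python
# Hypothetical least-loaded scheduler illustrating the load-balancing goal.
import heapq

def assign_requests(request_sizes, num_dp_instances):
    """Greedily place each request on the least-loaded DP instance."""
    heap = [(0, i) for i in range(num_dp_instances)]  # (load, instance_id)
    loads = [0] * num_dp_instances
    for size in request_sizes:
        load, idx = heapq.heappop(heap)
        loads[idx] = load + size
        heapq.heappush(heap, (loads[idx], idx))
    return loads

# Four DP instances, uneven request sizes; final loads end up close.
print(assign_requests([512, 128, 1024, 256, 768, 384], 4))
# -> [512, 896, 1024, 640]
```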

Prefilling and Decoding Phases

The architecture of DeepSeek-V3/R1 employs different degrees of parallelism during the prefill and decode phases:

  • Prefilling Phase: Utilizes Routed Expert EP32 and MLA/Shared Expert DP32, with each deployment unit spanning 4 nodes and 32 redundant routed experts.
  • Decoding Phase: Employs Routed Expert EP144 and MLA/Shared Expert DP144, with each deployment unit spanning 18 nodes.
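The node counts above are consistent with the parallelism degrees if each node carries 8 H800 GPUs (a standard HGX-style configuration; the per-node GPU count is an assumption, not stated in the article):

```python
# Sanity check of the deployment-unit sizes, assuming 8 H800 GPUs per node.
GPUS_PER_NODE = 8

prefill_gpus = 4 * GPUS_PER_NODE    # 4 nodes  -> 32 GPUs, matching EP32/DP32
decode_gpus = 18 * GPUS_PER_NODE    # 18 nodes -> 144 GPUs, matching EP144/DP144

print(prefill_gpus, decode_gpus)  # -> 32 144
```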

Communication-Computation Overlapping

To optimize throughput, DeepSeek developed a communication-computation overlapping mechanism. During the prefilling phase, the system alternates between two microbatches, allowing the communication cost of one microbatch to be hidden behind the computation of the other. In the decoding phase, it subdivides the attention layer into two steps and uses a 5-stage pipeline to achieve seamless overlapping.
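A toy timing model shows why alternating two microbatches helps. The millisecond figures are invented for illustration; the real system overlaps all-to-all dispatch/combine traffic with attention and MoE compute.

```python
# Toy model of dual-microbatch overlapping: while microbatch A communicates,
# microbatch B computes, and vice versa. Timings are hypothetical.

COMPUTE_MS = 10.0   # assumed per-microbatch compute time
COMM_MS = 8.0       # assumed per-microbatch communication time

def serial_time(n_microbatches: int) -> float:
    """No overlap: each microbatch pays compute + communication in full."""
    return n_microbatches * (COMPUTE_MS + COMM_MS)

def overlapped_time(n_microbatches: int) -> float:
    """Perfect overlap: steady-state cost is max(compute, comm) per
    microbatch, with one phase of the first microbatch left exposed."""
    return n_microbatches * max(COMPUTE_MS, COMM_MS) + min(COMPUTE_MS, COMM_MS)

print(serial_time(8), overlapped_time(8))  # -> 144.0 88.0
```

Under these assumed numbers, overlapping cuts wall-clock time by roughly 40%; the benefit grows as communication time approaches compute time.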

Day 6 of #OpenSourceWeek: One More Thing – DeepSeek-V3/R1 Inference System Overview

Optimized throughput and latency via:
  • Cross-node EP-powered batch scaling
  • Computation-communication overlap
  • Load balancing

Statistics of DeepSeek's Online Service:
  • 73.7k/14.8k…

— DeepSeek (@deepseek_ai) March 1, 2025

Diagram of DeepSeek’s Online Inference System

This diagram depicts a system with two main components: Prefill and Decode services, each managed by load balancers for parallel processing. The API Server directs requests to these services. Both services utilize an optional external key-value cache (KVCache) for storage. The system is designed for efficient and scalable handling of API requests through parallel processing and caching.

Performance Statistics

The performance of the DeepSeek-V3/R1 inference system has been impressive. Over 24 hours, the system achieved the following statistics:

  • Total Input Tokens: 608 billion, with 342 billion (56.3%) hitting the on-disk KV cache.
  • Total Output Tokens: 168 billion, with an average output speed of 20–22 tokens per second.
  • Average Throughput: Each H800 node delivered approximately 73.7k tokens/s for input and 14.8k tokens/s for output.
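These figures are internally consistent, as a quick cross-check shows. Token counts in the article are rounded to the nearest billion, so the computed hit rate matches the reported 56.3% only approximately.

```python
# Cross-checking the 24-hour statistics quoted above.
input_tokens = 608e9   # total input tokens
cache_hits = 342e9     # tokens served from the on-disk KV cache
output_tokens = 168e9  # total output tokens

hit_rate = cache_hits / input_tokens
print(f"{hit_rate:.1%}")  # -> 56.2% from the rounded counts,
                          #    vs. the reported 56.3%
```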

Cost and Revenue Analysis

The operational costs and revenue generated by the DeepSeek-V3/R1 system are noteworthy. The total daily cost for running the inference services, assuming a leasing cost of $2 per hour per H800 GPU, amounted to $87,072.

If all tokens were billed at DeepSeek-R1’s pricing, the theoretical total daily revenue would be $562,027, a remarkable cost-profit margin of 545%. The pricing structure is as follows:

  • R1 Pricing:
    • $0.14/M for input tokens (cache hit)
    • $0.55/M for input tokens (cache miss)
    • $2.19/M for output tokens
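The headline figures can be reproduced from the published prices and the (rounded) token counts above. Because the article rounds token counts to the nearest billion, the computed revenue and margin land slightly off the exact reported values.

```python
# Reproducing the daily cost, theoretical revenue, and margin.
H800_COST_PER_HOUR = 2.0
daily_cost = 87_072.0                               # reported total daily cost
avg_gpus = daily_cost / (H800_COST_PER_HOUR * 24)   # implied average GPU count

# Theoretical revenue at R1 pricing ($ per million tokens).
hit, miss, out = 342e9, 608e9 - 342e9, 168e9
revenue = (hit * 0.14 + miss * 0.55 + out * 2.19) / 1e6
margin = (revenue - daily_cost) / daily_cost

print(round(avg_gpus))      # -> 1814 H800s leased on average
print(round(revenue))       # -> 562100, vs. the reported $562,027
print(f"{margin:.0%}")      # -> 546% from rounded counts; reported as 545%
```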

However, actual revenue is lower due to several factors:

  • DeepSeek-V3’s pricing is significantly lower than R1’s.
  • Only a subset of services are monetized, with web and app access remaining free.
  • Nighttime discounts are applied during off-peak hours.

Graph Overview

  • The Graph Displays Two Datasets: Cost (in yellow) and Theoretical Income (in blue) over 24 hours, from 12:00 to 12:00.
  • Data Trends: Theoretical income shows significant peaks during certain hours, indicating higher potential earnings, while costs remain relatively stable and low in comparison.
  • Time Analysis: Cost remains consistently low, suggesting efficient operations, while theoretical income fluctuates, hinting at varying levels of engagement or activity.

Note: The theoretical income is based on API pricing calculations and does not reflect actual earnings.

For detailed analysis, please refer to the Day 6 GitHub repository.

Previous Updates:

  • Day 1: Release of FlashMLA
  • Day 2: Release of DeepEP
  • Day 3: Release of DeepGEMM
  • Day 4: Optimized Parallelism Strategies
  • Day 5: Launch of 3FS and Smallpond Framework

Conclusion

The DeepSeek-V3/R1 inference system represents a significant advancement in the field of artificial intelligence, particularly in optimizing throughput and latency. Through the innovative use of cross-node Expert Parallelism, effective load balancing, and communication-computation overlapping, DeepSeek has achieved impressive performance metrics.

As they continue to refine their systems and share insights with the community, they are contributing to the broader goals of artificial general intelligence (AGI). The insights gained from this week will not only enhance understanding but also pave the way for future innovations in AI technology.

DeepSeek encourages the community to engage with these resources, as they provide valuable insight into the ongoing developments in the project and its implications for the future of AI.

