DeepSeek-V4-Pro

reasoning
coding

DeepSeek-V4-Pro represents a fundamental step forward in open large language model (LLM) design, delivering high efficiency on very large inputs and supporting context lengths of up to 1 million tokens. Built on a Mixture-of-Experts (MoE) architecture, the model contains 1.6 trillion total parameters yet activates only 49 billion parameters per generated token. The key advantage of V4-Pro over previous versions (including DeepSeek-V3.2) and competing solutions lies in a radical reduction in computational cost, which makes ultra-long contexts both practical and economically viable.
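As a quick sanity check on the sparsity these figures imply, the fraction of weights active per generated token follows directly from the two numbers above:

```python
total_params = 1.6e12   # total MoE parameters, as stated above
active_params = 49e9    # parameters activated per generated token

active_fraction = active_params / total_params
print(f"Active per token: {active_fraction:.1%}")  # → Active per token: 3.1%
```

In other words, each token touches only about 3% of the model's weights, which is where most of the inference-cost savings of the MoE design come from.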

At the heart of V4-Pro’s computational efficiency is a departure from uniform context compression in favor of an innovative “hybrid attention” mechanism. Different groups of layers employ one of two new mechanisms: Compressed Sparse Attention (CSA) and Heavily Compressed Attention (HCA). In CSA mode, the model compresses the KV-cache by packing 4 original tokens into 1 vector, after which a lightweight DSA (DeepSeek Sparse Attention) indexer, the Lightning Indexer, selects only the most relevant blocks from the entire history for computation. In HCA mode, extreme compression with a ratio of 1:128 is applied; thanks to this dense packing, the model can afford to perform full (non-sparse) global attention over all tokens in the history simultaneously. Importantly, in both modes a local sliding-window mechanism operates in parallel: it processes the immediately preceding tokens without any compression, ensuring that the model never loses a precise connection to the current context.
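The CSA recipe can be illustrated with a toy single-query sketch: pack history tokens 4:1, let a cheap indexer pick the top-scoring compressed blocks, and attend over those blocks together with an uncompressed local window. Mean-pooling, the dot-product indexer, and all sizes here are illustrative assumptions, not the published DeepSeek-V4-Pro kernels:

```python
import numpy as np

def hybrid_attention_sketch(q, k, v, block=4, top_k=2, window=8):
    """Toy sketch of the CSA idea: compressed distant history + exact local window.

    q: (d,) current query; k, v: (T, d) cached keys/values.
    All design choices here are assumptions for illustration only.
    """
    T, d = k.shape
    start = max(0, T - window)   # tokens from `start` onward stay uncompressed
    n_blocks = start // block    # leftover history tokens are ignored in this toy

    # 1) Pack each group of `block` history tokens into one KV vector (4:1).
    kc = k[:n_blocks * block].reshape(n_blocks, block, d).mean(axis=1)
    vc = v[:n_blocks * block].reshape(n_blocks, block, d).mean(axis=1)

    # 2) Lightweight indexer: score compressed blocks, keep only the top-k.
    keep = np.argsort(kc @ q)[n_blocks - min(top_k, n_blocks):]

    # 3) Attend jointly over selected blocks and the raw sliding window.
    k_sel = np.concatenate([kc[keep], k[start:]], axis=0)
    v_sel = np.concatenate([vc[keep], v[start:]], axis=0)
    logits = k_sel @ q / np.sqrt(d)
    w = np.exp(logits - logits.max())
    return (w / w.sum()) @ v_sel
```

The payoff is that attention cost scales with (top-k blocks + window) rather than with the full history length, which is what makes 1M-token contexts tractable.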

Training DeepSeek-V4-Pro required a number of advanced engineering practices. The model was pretrained on more than 32 trillion high-quality tokens using the Muon optimizer, which provides enhanced stability at this scale. To prevent the signal from fading as it passes through hundreds of layers, manifold-constrained hyper-connections (mHC) were introduced to strengthen the residual connections. The model uses mixed-precision computation: expert weights are stored in the ultra-compact FP4 format, while the remaining parameters use FP8, reducing memory and hardware requirements.
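A back-of-envelope estimate shows what this precision split buys. The expert/non-expert weight split is not stated above, so the 95% figure below is a loudly labeled assumption, not a published number:

```python
total_params = 1.6e12             # total parameters, as stated above
expert_share = 0.95               # ASSUMPTION: fraction of weights inside MoE experts
fp4_bytes, fp8_bytes = 0.5, 1.0   # bytes per parameter in FP4 / FP8

expert_bytes = total_params * expert_share * fp4_bytes
other_bytes = total_params * (1 - expert_share) * fp8_bytes
total_tb = (expert_bytes + other_bytes) / 1e12
print(f"Approx. weight storage: {total_tb:.2f} TB")  # → Approx. weight storage: 0.84 TB
```

Under that assumption the weights fit in under 1 TB, versus roughly 3.2 TB if everything were stored in BF16.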

On key benchmarks, DeepSeek-V4-Pro ranks among the leaders across both open and closed models, and on several of them it surpasses proprietary flagships.

The model provides three operating modes: “Non-think” for lightning-fast responses, “Think High” for standard reasoning, and “Think Max” for recursive analysis of the most complex tasks.
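Selecting a mode would typically be a single field in the request payload. The field name "reasoning_mode" and the mode identifiers below are hypothetical placeholders; check the actual API reference for the real parameter:

```python
def build_request(prompt: str, mode: str = "non-think") -> dict:
    """Build a chat payload that selects one of the three operating modes.

    "reasoning_mode" and the mode strings are hypothetical names used for
    illustration; they are not confirmed API parameters.
    """
    assert mode in {"non-think", "think-high", "think-max"}
    return {
        "model": "DeepSeek-V4-Pro",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_mode": mode,  # hypothetical parameter name
    }
```

A reasonable pattern is to default to "non-think" for latency-sensitive traffic and escalate to the thinking modes only for requests that need multi-step reasoning.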

Use cases for DeepSeek-V4-Pro cover analysis and synthesis of information from ultra-long documents (legal, scientific literature reviews, financial reports, technical documentation), software development (code completion, refactoring, generation of complex algorithms), as well as agentic workflows requiring the storage of tool call history and multi-step reasoning chains. Beyond this, V4-Pro is positioned as an ideal tool for scientific research in AI, engineering, mathematics, and other fields.


Announce Date: 22.04.2026
Parameters: 2T
Experts: 385
Activated at inference: 49B
Context: 1049K
Layers: 61
Attention Type: DeepSeek Sparse Attention
Developer: DeepSeek
Transformers Version: 4.57.1
License: MIT

Public endpoint

Use our pre-built public endpoints for free to test inference and explore DeepSeek-V4-Pro capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name | Context | Type | GPU | Status | Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting DeepSeek-V4-Pro

Prices:
Name                       | Context   | Type   | GPU | Price, hour | TPS   | Max Concurrency
h200-8.52.1024.960         | 1,048,576 | tensor | 8   | $37.37      | 3.065 | n/a
h200-8.52.1024.960.nvlink  | 1,048,576 | tensor | 8   | $37.37      | 3.065 | n/a

Prices:
Name                       | Context   | Type   | GPU | Price, hour | TPS   | Max Concurrency
h200-8.52.1024.960         | 1,048,576 | tensor | 8   | $37.37      | 2.330 | n/a
h200-8.52.1024.960.nvlink  | 1,048,576 | tensor | 8   | $37.37      | 2.330 | n/a

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.