DeepSeek-V4-Pro is a fundamental step forward in open large language model (LLM) design, supporting context lengths of up to 1 million tokens while remaining efficient at that scale. Built on a Mixture-of-Experts (MoE) architecture, the model contains 1.6 trillion total parameters yet activates only 49 billion per generated token (roughly 3% of the total). The key innovation of V4-Pro over previous versions (including DeepSeek-V3.2) and competing systems is a radical reduction in computational cost, which makes ultra-long contexts both practical and economically viable.
At the heart of V4-Pro’s computational efficiency is a departure from uniform context compression in favor of a hybrid attention mechanism. Different groups of layers employ two new mechanisms: Compressed Sparse Attention (CSA) and Heavily Compressed Attention (HCA). In CSA mode, the model compresses the KV-cache by packing four consecutive tokens into a single vector, after which a lightweight DSA indexer (Lightning Indexer) selects only the most relevant blocks from the entire history for computation. In HCA mode, compression is far more aggressive, packing 128 tokens into one vector; this dense packing lets the model afford full (non-sparse) global attention over the entire compressed history at once. Importantly, both modes run a local sliding-window mechanism in parallel, which processes the immediately preceding tokens without any compression, so the model never loses a precise connection to the current context.
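To make the mechanism concrete, here is a minimal single-query PyTorch sketch of the CSA path. The mean-pooling compressor, the raw dot-product indexer, `top_k=8`, and `window=128` are illustrative assumptions; the actual model uses learned compression and a trained Lightning Indexer.

```python
import torch
import torch.nn.functional as F

def compress_kv(k: torch.Tensor, v: torch.Tensor, ratio: int):
    """Pack `ratio` consecutive tokens into one vector by mean pooling.
    A stand-in for the model's learned compression; shapes are (T, d)."""
    T = k.shape[0] - k.shape[0] % ratio  # drop the ragged tail for brevity
    k_c = k[:T].reshape(-1, ratio, k.shape[-1]).mean(dim=1)
    v_c = v[:T].reshape(-1, ratio, v.shape[-1]).mean(dim=1)
    return k_c, v_c

def csa_step(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
             ratio: int = 4, top_k: int = 8, window: int = 128):
    """One query's worth of CSA-style attention:
    1) compress the distant history 4:1,
    2) let a cheap dot-product "indexer" pick the top-k compressed blocks,
    3) attend over those blocks plus an uncompressed sliding window."""
    d = q.shape[-1]
    history, recent = k[:-window], k[-window:]
    v_hist, v_recent = v[:-window], v[-window:]

    # Sparse branch over the compressed distant history.
    k_c, v_c = compress_kv(history, v_hist, ratio)
    idx = (k_c @ q).topk(min(top_k, k_c.shape[0])).indices  # indexer scores
    k_sel = torch.cat([k_c[idx], recent])   # selected blocks + local window
    v_sel = torch.cat([v_c[idx], v_recent])

    attn = F.softmax(k_sel @ q / d ** 0.5, dim=0)
    return attn @ v_sel

# Toy usage: 4096 cached tokens, 64-dim heads, one new query.
k = torch.randn(4096, 64); v = torch.randn(4096, 64); q = torch.randn(64)
print(csa_step(q, k, v).shape)  # torch.Size([64])
```

The HCA path would take the same shape with a ratio of 128 and no top-k selection, attending over every compressed block.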
Training DeepSeek-V4-Pro required a number of advanced engineering practices. The model was pretrained on more than 32 trillion high-quality tokens using the Muon optimizer, which provides enhanced stability at this scale. To prevent the signal from fading as it passes through hundreds of layers, manifold-constrained hyper-connections (mHC) were introduced to strengthen the residual connections. The model uses mixed-precision storage: expert weights are kept in the ultra-compact FP4 format, while the remaining parameters use FP8, which reduces memory and hardware requirements.
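As a minimal sketch of what such a storage scheme can look like: PyTorch exposes FP8 (e4m3) natively but has no FP4 dtype, so the 4-bit format below is emulated with signed integer codes plus a per-block scale. The block size of 32 is an illustrative assumption, not the model's actual quantization recipe.

```python
import torch

def quantize_fp8(w: torch.Tensor) -> torch.Tensor:
    """Store a weight tensor in FP8 (e4m3); upcast before matmuls."""
    return w.to(torch.float8_e4m3fn)

def quantize_4bit(w: torch.Tensor, block: int = 32):
    """Illustrative 4-bit block quantization for expert weights:
    signed codes in [-8, 7] plus a per-block scale. Assumes the number
    of elements is divisible by `block`."""
    flat = w.reshape(-1, block)
    scale = flat.abs().amax(dim=1, keepdim=True) / 7.0
    codes = (flat / scale).round().clamp(-8, 7).to(torch.int8)
    return codes, scale

def dequantize_4bit(codes: torch.Tensor, scale: torch.Tensor, shape):
    """Recover an approximate float tensor from codes and scales."""
    return (codes.float() * scale).reshape(shape)

dense_w = quantize_fp8(torch.randn(128, 128))
print(dense_w.dtype)  # torch.float8_e4m3fn

expert_w = torch.randn(256, 256)
codes, scale = quantize_4bit(expert_w)
restored = dequantize_4bit(codes, scale, expert_w.shape)
print((expert_w - restored).abs().max())  # small quantization error
```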
On key benchmarks, DeepSeek-V4-Pro ranks among the leaders across both open and closed models, and on several benchmarks it surpasses proprietary flagships.
The model provides three operating modes: “Non-think” for lightning-fast responses, “Think High” for standard reasoning, and “Think Max” for recursive analysis of the most complex tasks.
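To illustrate mode selection, here is a minimal sketch of a request to an OpenAI-compatible chat endpoint. The URL and the `reasoning_mode` field are hypothetical placeholders rather than a documented API; consult the provider's documentation for the actual parameter names.

```python
import requests

# Hypothetical endpoint and parameter names, shown only to illustrate
# choosing one of the three operating modes.
resp = requests.post(
    "https://api.example.com/v1/chat/completions",
    json={
        "model": "deepseek-v4-pro",
        "reasoning_mode": "think_max",  # "non_think" | "think_high" | "think_max"
        "messages": [{"role": "user", "content": "Prove the claim step by step."}],
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
```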
Use cases for DeepSeek-V4-Pro include analysis and synthesis of information from ultra-long documents (legal documents, scientific literature reviews, financial reports, technical documentation), software development (code completion, refactoring, generation of complex algorithms), and agentic workflows that must retain tool-call histories and multi-step reasoning chains. Beyond this, V4-Pro is positioned as an ideal tool for scientific research in AI, engineering, mathematics, and other fields.
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying a private instance in one of the configurations listed below:
| Context | Type | GPU | Price | TPS |
|---|---|---|---|---|
| 1,048,576 | tensor | 8 | $37.37 | 3.065 |
| 1,048,576 | tensor | 8 | $37.37 | 2.330 |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.