DeepSeek-V3.2 is built upon DeepSeek-V3.1-Terminus with a single architectural change: the DeepSeek Sparse Attention (DSA) mechanism. DSA consists of two key components: a lightning indexer and fine-grained token selection. The indexer computes importance scores between the current query token and preceding tokens, after which the selection mechanism keeps only the top-k key tokens (2048 per query from the entire sequence). DSA is implemented on top of the Multi-head Latent Attention (MLA) architecture, and its fine-grained selection preserves relevance information at the level of individual tokens, which coarser sparsity patterns would lose. Together, these components reduce attention's computational complexity, enable efficient long-context processing, and shrink the memory allocated for the KV cache.
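To make the selection step concrete, here is a minimal PyTorch sketch of top-k key selection driven by indexer scores. It is illustrative only: the function name and the dense score matrix are assumptions; the production kernel operates on MLA latents and avoids materializing dense scores.

```python
import torch

def dsa_topk_mask(index_scores: torch.Tensor, k: int = 2048) -> torch.Tensor:
    """index_scores: [seq_len, seq_len] lightning-indexer scores, where
    row q scores how relevant each earlier token is to query q.
    Returns a boolean [seq_len, seq_len] mask keeping, for each query,
    only its top-k causal key tokens."""
    seq_len = index_scores.size(0)
    # a query may only attend to itself and earlier tokens
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    scores = index_scores.masked_fill(~causal, float("-inf"))
    k_eff = min(k, seq_len)
    topk = scores.topk(k_eff, dim=-1).indices          # [seq_len, k_eff]
    mask = torch.zeros(seq_len, seq_len, dtype=torch.bool)
    mask[torch.arange(seq_len).unsqueeze(1), topk] = True
    # early queries have fewer than k causal keys; re-apply causality
    return mask & causal
```

The resulting mask would then gate the regular attention computation, so each query pays for at most 2048 keys regardless of sequence length.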
The key innovation of DeepSeek-V3.2 is its ability to reason directly within the tool-calling loop. The model uses a dedicated context-management scheme: the accumulated reasoning trace is preserved between tool calls and is discarded only when a new user message arrives, which prevents the model from re-reasoning the same problem from scratch. A large-scale dataset of agentic tasks was built to train this capability.
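A minimal sketch of that policy, assuming a chat-style message list where assistant turns may carry a separate reasoning field (the field name and message schema are assumptions for illustration, not DeepSeek's actual format):

```python
def prune_history(messages: list[dict]) -> list[dict]:
    """Drop assistant 'reasoning' content that precedes the latest user
    message, while keeping reasoning produced after it (i.e., within the
    current tool-calling loop)."""
    last_user = max(
        (i for i, m in enumerate(messages) if m["role"] == "user"),
        default=-1,
    )
    pruned = []
    for i, m in enumerate(messages):
        if m["role"] == "assistant" and i < last_user:
            # copy the message without its reasoning trace
            m = {key: val for key, val in m.items() if key != "reasoning"}
        pruned.append(m)
    return pruned
```

Calling this once per new user turn keeps the context window bounded without interrupting an in-progress chain of tool calls.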
DeepSeek-V3.2 demonstrates performance comparable to GPT-5 and outperforms open models on key benchmarks. On AIME 2025 (the American Invitational Mathematics Examination) the model scored 93.1%, and on HMMT February 2025 (the Harvard-MIT Mathematics Tournament) it scored 92.5%, close to GPT-5 and ahead of Kimi-K2-Thinking. In competitive programming it reached a Codeforces rating of 2386, surpassing Claude Sonnet 4.5 (1480). On agentic tasks DeepSeek-V3.2 performs on par with the best proprietary models while significantly outperforming other open models: it solves 73.1% of tasks on SWE-bench Verified (bug fixing in real repositories), and on the Tool-Decathlon benchmark (diverse tool use) it scores 35.2% versus 17.6% for Kimi-K2 and 16.0% for MiniMax-M2.
DeepSeek-V3.2 is a strong fit for tasks that combine complex reasoning with tool use: developing and debugging code in real repositories, building information-retrieval agents that verify facts via web search, interpreting data through code execution, automating workflows via MCP (Model Context Protocol), and working with RAG systems.
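For example, once the model is served behind an OpenAI-compatible endpoint (as most inference servers provide), a client call might look like the following sketch; the base URL, model id, and API key are placeholders, not values from this service:

```python
from openai import OpenAI

# point the standard OpenAI client at a self-hosted instance
client = OpenAI(base_url="http://YOUR-INSTANCE:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="deepseek-v3.2",  # placeholder model id
    messages=[{"role": "user", "content": "Summarize this repo's failing test."}],
)
print(resp.choices[0].message.content)
```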
| Model Name | Context | Type | GPU | TPS | Status | Link |
|---|---|---|---|---|---|---|
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following scenarios:
| Context | Type | vCPU | RAM, MB | Disk, GB | GPU | Price | |
|---|---|---|---|---|---|---|---|
| 163,840 | pipeline | 44 | 524288 | 480 | 6 | $14.10 | Launch |
| 163,840 | pipeline | 32 | 524288 | 480 | 3 | $14.36 | Launch |
| 163,840 | tensor | 44 | 524288 | 480 | 8 | $18.35 | Launch |
| 163,840 | tensor | 32 | 786432 | 480 | 4 | $19.23 | Launch |
| Context | Type | vCPU | RAM, MB | Disk, GB | GPU | Price | |
|---|---|---|---|---|---|---|---|
| 163,840 | pipeline | 52 | 917504 | 960 | 6 | $28.39 | Launch |
| 163,840 | tensor | 52 | 1048576 | 960 | 8 | $37.37 | Launch |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.