DeepSeek-V3.2


DeepSeek-V3.2 is built on DeepSeek-V3.1-Terminus with a single architectural change: the DeepSeek Sparse Attention (DSA) mechanism. DSA consists of two components: a lightning indexer and fine-grained token selection. The indexer computes importance scores between the current query token and the preceding tokens, and the selection mechanism then keeps only the top-k key tokens (2,048 from the entire sequence). DSA is implemented on top of the Multi-head Latent Attention (MLA) architecture; because selection is fine-grained (per token), relevance information for individual tokens is not lost. Together, these components reduce computational complexity and enable efficient long-context processing while shrinking the memory needed for the KV cache.
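
The selection step can be pictured with the sketch below. This is an illustration only: the tensor shapes, the form of the indexer projection, and all names are assumptions, not the released implementation.

```python
import torch

def select_top_k_tokens(query, keys, indexer_weights, k=2048):
    """Illustrative sketch of DSA-style token selection (not the official code).

    query:           (d,)    current query token representation
    keys:            (T, d)  representations of the T previous tokens
    indexer_weights: (d, d)  lightweight "lightning indexer" projection (assumed form)
    """
    T = keys.shape[0]
    # Lightning indexer: cheap importance score between the query and each past token.
    scores = keys @ (indexer_weights @ query)   # (T,)
    # Fine-grained selection: keep only the k most relevant key tokens.
    k = min(k, T)
    top_idx = torch.topk(scores, k).indices     # (k,)
    selected_keys = keys[top_idx]               # (k, d)
    # Full attention is then computed only over the selected tokens,
    # so cost scales with k (2048) instead of the full sequence length T.
    return top_idx, selected_keys

# Toy usage: an 8192-token context reduced to 2048 attended tokens.
d, T = 128, 8192
q, K, W = torch.randn(d), torch.randn(T, d), torch.randn(d, d)
idx, K_sel = select_top_k_tokens(q, K, W)
print(idx.shape, K_sel.shape)  # torch.Size([2048]) torch.Size([2048, 128])
```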

The key innovation of DeepSeek-V3.2 is its ability to reason directly within the tool-calling process. The model uses a specialized context-management scheme: the accumulated reasoning is preserved between tool calls and is discarded only when a new user message arrives, which prevents the model from repeatedly re-reasoning over the same problem. A large-scale dataset of agent tasks was built to train this capability.
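
A simplified sketch of that context policy is shown below; the message layout and the reasoning_content field are assumptions for illustration, not the documented API schema.

```python
def prune_history(messages, new_message):
    """Illustrative context-management rule (assumed message schema).

    - Between tool calls, assistant messages keep their reasoning_content,
      so the model can continue its chain of thought instead of re-deriving it.
    - When a new user message arrives, earlier reasoning traces are dropped
      and only the visible conversation (user/assistant/tool content) is kept.
    """
    if new_message["role"] == "user":
        pruned = []
        for m in messages:
            m = dict(m)
            m.pop("reasoning_content", None)  # discard old reasoning traces
            pruned.append(m)
        messages = pruned
    # Tool results arriving mid-task leave the history, including reasoning, untouched.
    return messages + [new_message]

history = [
    {"role": "user", "content": "Find the bug in utils.py"},
    {"role": "assistant", "content": "Calling the file reader...",
     "reasoning_content": "The stack trace points at a missing None check...",
     "tool_calls": [{"id": "call_1", "type": "function",
                     "function": {"name": "read_file", "arguments": '{"path": "utils.py"}'}}]},
]
history = prune_history(history, {"role": "tool", "tool_call_id": "call_1", "content": "..."})
history = prune_history(history, {"role": "user", "content": "Now fix it."})
assert all("reasoning_content" not in m for m in history)
```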

DeepSeek-V3.2 delivers performance comparable to GPT-5 and outperforms open models on key benchmarks. On AIME 2025 (American Invitational Mathematics Examination) the model scored 93.1%, and on HMMT February 2025 (Harvard-MIT Mathematics Tournament) it scored 92.5%, close to GPT-5 and ahead of Kimi-K2-Thinking. On Codeforces (competitive programming) it reached a rating of 2386, surpassing Claude 4.5 Sonnet (1480). In agent tasks, DeepSeek-V3.2 performs on par with the best proprietary models while significantly outperforming other open models: it solves 73.1% of tasks on SWE-bench Verified (real-world code-fixing tasks) and scores 35.2% on the Tool-Decathlon benchmark (diverse tools), versus 17.6% for Kimi-K2 and 16.0% for MiniMax-M2.

DeepSeek-V3.2 is a strong fit for tasks that require complex reasoning and tool usage: developing and debugging code in real repositories, building information-retrieval agents that verify facts via web search, interpreting data through code execution, automating workflows via MCP (Model Context Protocol), and working in RAG systems.
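
As an illustration of the agentic scenarios above, the sketch below passes a web-search tool to the model through an OpenAI-compatible client. The base URL, model name, and tool schema are placeholders assumed for the example, not documented values.

```python
from openai import OpenAI

# Placeholder endpoint and token; substitute your provider's values.
client = OpenAI(base_url="https://example-endpoint/v1", api_key="YOUR_TOKEN")

tools = [{
    "type": "function",
    "function": {
        "name": "web_search",  # hypothetical tool exposed by your agent runtime
        "description": "Search the web and return top results for fact verification.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek-v3.2",  # placeholder model identifier
    messages=[{"role": "user", "content": "Verify the release date of DSA and cite sources."}],
    tools=tools,
)
# If the model decides to call the tool, execute it, append the result as a
# {"role": "tool", ...} message, and call the API again to continue the loop.
print(response.choices[0].message.tool_calls)
```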


Announce Date: 01.12.2025
Parameters: 685B
Experts: 256
Activated at inference: 37B
Context: 164K
Layers: 61
Attention Type: DeepSeek Sparse Attention
VRAM requirements: 328.7 GB with 4-bit quantization
Developer: DeepSeek
Transformers Version: 4.44.2
License: MIT

Public endpoint

Use our pre-built public endpoints for free to test inference and explore DeepSeek-V3.2 capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
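
Once a public endpoint becomes available, a quick way to test inference is a direct HTTP request. The sketch below assumes an OpenAI-compatible chat completions API; the base URL and model identifier are placeholders to be replaced with the values from the endpoint card.

```python
import requests

API_TOKEN = "YOUR_TOKEN"                  # from the token management page
BASE_URL = "https://example-endpoint/v1"  # placeholder; taken from the endpoint card

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "model": "deepseek-v3.2",         # placeholder model identifier
        "messages": [{"role": "user", "content": "Summarize what DeepSeek Sparse Attention does."}],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```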

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended configurations for hosting DeepSeek-V3.2

Prices:
Name                            Context, tokens   Parallelism   vCPU   RAM, MB   Disk, GB   GPU   Price, hour
teslaa100-6.44.512.480.nvlink   163,840           pipeline      44     524288    480        6     $14.10
h200-3.32.512.480               163,840           pipeline      32     524288    480        3     $14.36
teslaa100-8.44.512.480.nvlink   163,840           tensor        44     524288    480        8     $18.35
h200-4.32.768.480               163,840           tensor        32     786432    480        4     $19.23
Prices:
Name                            Context, tokens   Parallelism   vCPU   RAM, MB   Disk, GB   GPU   Price, hour
h200-6.52.896.960               163,840           pipeline      52     917504    960        6     $28.39
h200-8.52.1024.960              163,840           tensor        52     1048576   960        8     $37.37


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.