DeepSeek-V3.1


DeepSeek-V3.1 is a major update in DeepSeek-AI’s model lineup. According to the developers, “This is a step toward the era of agents.” The key feature of DeepSeek-V3.1 is its hybrid reasoning system, which lets the model switch between two modes: *thinking mode* (reasoning with a chain of thought) and *non-thinking mode* (direct response generation). Built on a Mixture-of-Experts (MoE) architecture, the model has 671 billion total parameters, but only 37 billion are activated per token during inference, striking a practical balance between quality and inference cost.
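Below is a minimal sketch of how the two modes can be selected through the Hugging Face chat template. It assumes the published chat template accepts a `thinking` flag, as described in the model card; verify the exact interface of the checkpoint you deploy.

```python
# Minimal sketch: switching DeepSeek-V3.1 between thinking and non-thinking
# modes via the Hugging Face chat template. Assumes the published template
# exposes a `thinking` flag; check the model card of the release you use.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3.1")

messages = [{"role": "user", "content": "Solve: what is 17 * 24?"}]

# Thinking mode: the prompt asks the model to emit a chain of thought first.
thinking_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, thinking=True
)

# Non-thinking mode: the model answers directly, without a reasoning block.
direct_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, thinking=False
)

print(thinking_prompt)
print(direct_prompt)
```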

For long-context handling, the base checkpoint went through an intensive two-phase long-context extension: 630 billion tokens in the first phase (32K context length), ten times more than for V3, and 209 billion tokens in the second phase (128K context length), 3.3 times more than for its predecessor. As a result, the developers recommend working with a 128K context window, although the model can technically handle even longer sequences. Notably, the model was trained using an FP8 data format, making it well suited to deployments that rely on this quantization format.
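As a practical illustration of the 128K recommendation, the sketch below checks whether a long document fits the window before it is sent to an endpoint. The split between prompt budget and output reserve is an assumption made for the example.

```python
# Minimal sketch: keeping a long prompt inside the recommended 128K-token
# window before sending it to a DeepSeek-V3.1 endpoint.
from transformers import AutoTokenizer

CONTEXT_BUDGET = 128_000       # recommended window from this page
RESERVED_FOR_OUTPUT = 4_000    # room left for the generated answer (assumption)

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3.1")

def fits_in_context(document: str) -> bool:
    """Return True if the document plus the output reserve fits the window."""
    n_tokens = len(tokenizer.encode(document))
    return n_tokens + RESERVED_FOR_OUTPUT <= CONTEXT_BUDGET

print(fits_in_context("A short test document."))
```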

On key benchmarks, the new model consistently outperforms previous versions: DeepSeek-V3.1-NonThinking surpasses DeepSeek-V3-0324, while DeepSeek-V3.1-Thinking achieves scores 1–2 percentage points higher than DeepSeek-R1-0528. Moreover, DeepSeek-V3.1 shows dramatic improvements in tool usage and agent-based tasks—especially in non-thinking mode. In terms of reasoning efficiency, DeepSeek-V3.1-Thinking generates reasoning chains significantly faster than its predecessor, DeepSeek-R1-0528.
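To make the tool-usage claim concrete, here is a minimal sketch of a function-calling request against an OpenAI-compatible DeepSeek-V3.1 endpoint. The base URL, API key, model name, and the `get_weather` tool are placeholders invented for the example; substitute the values of your own deployment.

```python
# Minimal sketch of tool calling against an OpenAI-compatible DeepSeek-V3.1
# endpoint. Endpoint URL, key, model name and the tool are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://your-endpoint.example/v1", api_key="YOUR_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="deepseek-chat",  # non-thinking mode on the official API (assumption for self-hosted setups)
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# If the model decides to call the tool, the call arrives in tool_calls.
print(response.choices[0].message.tool_calls)
```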

DeepSeek-AI models have already firmly established themselves as capable, general-purpose conversational assistants, and DeepSeek-V3.1 takes the baton and opens up new possibilities. In software development, the model not only generates high-quality code but also supports debugging and refactoring, and it is fully compatible with agent frameworks. In scientific research, it assists with analyzing academic papers and interpreting data, and helps formulate and test hypotheses. In business analytics, it serves as a powerful tool for complex data analysis and for generating reports with actionable recommendations. And the list of use cases and application domains for the new model goes on.


Announce Date: 21.08.2025
Parameters: 685B (671B main-model weights plus the Multi-Token Prediction module weights)
Experts: 256
Activated: 37B
Context: 164K
Attention Type: Multi-head Latent Attention
VRAM requirements: 329.7 GB with 4-bit quantization (see the estimate sketch after this list)
Developer: DeepSeek
Transformers Version: 4.44.2
License: MIT
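The back-of-the-envelope calculation below shows how weight-only memory scales with bit width for a model of this size. It counts weights alone; the published 329.7 GB figure reflects a concrete quantization scheme, and real deployments also need room for the KV cache and activations.

```python
# Back-of-the-envelope weight memory for DeepSeek-V3.1 at different bit widths.
# Weights only: KV cache, activations and per-layer precision exceptions are
# not included, so actual VRAM requirements will differ.
TOTAL_PARAMS = 685e9  # total parameters, per the specification above

for bits in (16, 8, 4):
    gb = TOTAL_PARAMS * bits / 8 / 1e9  # bytes -> gigabytes (decimal)
    print(f"{bits}-bit weights: ~{gb:,.1f} GB")
```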

Public endpoint

Use our pre-built public endpoints to test inference and explore DeepSeek-V3.1 capabilities.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences (see the serving sketch after this list),
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
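A minimal sketch of what such a private deployment could look like with vLLM's offline Python API is given below. The tensor-parallel degree and context length are assumptions to tune to the instance you rent; support for custom weights or adapters depends on the serving stack you choose.

```python
# Minimal sketch: serving DeepSeek-V3.1 on a dedicated multi-GPU instance with
# vLLM's offline API. tensor_parallel_size and max_model_len are assumptions;
# match them to the GPUs and the context window you actually need.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-V3.1",
    tensor_parallel_size=8,   # split the weights across the instance's GPUs (assumption)
    max_model_len=131072,     # full 128K window, per the recommendation above
)

outputs = llm.generate(
    ["Summarize the advantages of MoE inference in two sentences."],
    SamplingParams(max_tokens=256, temperature=0.6),
)
print(outputs[0].outputs[0].text)
```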

Recommended configurations for hosting DeepSeek-V3.1

Prices:
Name | vCPU | RAM, MB | Disk, GB | GPU | Price, hour


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.