DeepSeek-V3.1 — a major update in DeepSeek-AI’s model lineup. According to developers, “This is a step toward the era of agents.” The key feature of DeepSeek-V3.1 is its hybrid reasoning system, enabling the model to switch between two modes: *thinking mode* (reasoning with chain-of-thought) and *non-thinking mode* (direct response generation). Built on a Mixture-of-Experts (MoE) architecture, the model has 671 billion total parameters, but only 37 billion parameters are activated per token during inference, ensuring an optimal balance between performance and inference cost.
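The mode switch above is driven by the chat template rather than by separate model weights. As a minimal sketch (the helper function is hypothetical; the `<think>`/`</think>` markers follow DeepSeek's published template convention, where non-thinking mode closes the tag immediately so the model skips the chain of thought):

```python
# Hypothetical illustration of hybrid-reasoning prompt assembly.
# Assumption: thinking mode opens the assistant turn with <think>,
# non-thinking mode closes it immediately with </think>.

def build_prompt(user_message: str, thinking: bool) -> str:
    """Assemble a single-turn prompt for a hybrid-reasoning model.

    In thinking mode the assistant turn is opened with <think>, so the
    model emits a chain of thought before the final answer; in
    non-thinking mode the tag is closed up front and the model answers
    directly.
    """
    prefix = f"<｜User｜>{user_message}<｜Assistant｜>"
    return prefix + ("<think>" if thinking else "</think>")

print(build_prompt("What is 2 + 2?", thinking=True))
print(build_prompt("What is 2 + 2?", thinking=False))
```

In practice the serving stack (or the hosted API's model selection) applies this template for you; the point is that one set of weights serves both modes.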
The model underwent intensive two-phase long-context training. The first phase used 630 billion tokens at a 32K context length—ten times more than V3—and the second used 209 billion tokens at a 128K context length—3.3 times more than its predecessor. As a result, the developers recommend using a 128K context window, although the model can technically handle even longer sequences. Notably, the model was trained in an FP8 data format, making it well suited to deployments that use this quantization format.
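The FP8 format has a direct, easy-to-estimate effect on deployment cost: one byte per parameter instead of two for BF16. A back-of-the-envelope sketch using the parameter counts quoted above (weights only, ignoring KV cache and activations):

```python
# Rough weight-memory estimate for a 671B-parameter MoE model.
# FP8 stores 1 byte per parameter; BF16 stores 2 bytes per parameter.
# KV cache, activations, and runtime overhead are not included.

TOTAL_PARAMS = 671e9   # total parameters (MoE)
ACTIVE_PARAMS = 37e9   # parameters activated per token

def weight_gb(params: float, bytes_per_param: int) -> float:
    """Return approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9

print(f"FP8 total weights:   ~{weight_gb(TOTAL_PARAMS, 1):.0f} GB")   # ~671 GB
print(f"BF16 total weights:  ~{weight_gb(TOTAL_PARAMS, 2):.0f} GB")   # ~1342 GB
print(f"Active per token:    ~{weight_gb(ACTIVE_PARAMS, 1):.0f} GB (FP8)")
```

Halving the weight footprint is what makes FP8-capable accelerators particularly attractive for serving a model of this size.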
On key benchmarks, the new model consistently outperforms previous versions: DeepSeek-V3.1-NonThinking surpasses DeepSeek-V3-0324, while DeepSeek-V3.1-Thinking achieves scores 1–2 percentage points higher than DeepSeek-R1-0528. Moreover, DeepSeek-V3.1 shows dramatic improvements in tool usage and agent-based tasks—especially in non-thinking mode. In terms of reasoning efficiency, DeepSeek-V3.1-Thinking generates reasoning chains significantly faster than its predecessor, DeepSeek-R1-0528.
DeepSeek-AI models have already firmly established themselves in the market as capable, versatile conversational assistants, and DeepSeek-V3.1 takes the baton and opens up new possibilities. In software development, the model not only generates high-quality code but also supports debugging and refactoring, and is fully compatible with agent frameworks. In scientific research, it assists in analyzing academic papers and interpreting data, and is valuable for formulating and testing hypotheses. In business analytics, it serves as a powerful tool for complex data analysis and for generating reports with actionable recommendations. And this list of use cases and application domains for the new model goes on and on.
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following scenarios:
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.