DeepSeek-V3.1-Terminus


DeepSeek-V3.1-Terminus is a rather unexpected addition to the DeepSeek-V3.1 series, released just one month after the launch of the base version. According to the developers, this update was driven by the need to address issues highlighted in user feedback. In particular, the model has significantly improved its language consistency — it now almost entirely eliminates cases of Chinese-English text mixing and the appearance of random characters. Additionally, agent capabilities have been fundamentally re-engineered: Code Agent now handles programming tasks with higher precision, while Search Agent demonstrates enhanced search efficiency and better integration with external tools.

DeepSeek-V3.1-Terminus delivers impressive results across key evaluation benchmarks, especially in tasks requiring tool usage and agent-based reasoning. On SimpleQA (a test measuring the factual accuracy of short answers), the model achieves 96.8%, up from 93.4% in the previous version, which ranks among the best results for open models. On SWE-bench Verified (evaluating software engineering capabilities), it scores 68.4%, surpassing many commercial solutions. In reasoning-only mode (without tool usage), it achieves 85.0 on MMLU-Pro (complex academic tasks), 80.7 on GPQA-Diamond (graduate-level questions), and 74.9 on LiveCodeBench (programming tasks).

Thanks to its enhanced agent capabilities and improved linguistic output stability, DeepSeek-V3.1-Terminus is particularly effective for automated software development, enterprise applications, and complex research-oriented tasks.


Announce Date: 22.09.2025
Parameters: 685B
Experts: 256
Activated: 37B
Context: 164K
Layers: 61
Attention Type: Multi-head Latent Attention
VRAM requirements: 329.7 GB using 4-bit quantization (see the estimate below)
Developer: DeepSeek
Transformers Version: 4.44.2
License: MIT
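
The VRAM figure above can be sanity-checked with simple arithmetic: 685B weights stored at 4 bits each occupy roughly 319 GiB on their own, and the listed 329.7 GB presumably also reserves room for serving overhead such as the KV cache. A minimal estimation sketch in Python, using only the numbers from the specification list:

    # Back-of-the-envelope estimate of weight memory for a quantized checkpoint.
    # Weights only: KV cache, activation buffers, and quantization metadata add
    # overhead on top, so the real serving requirement is somewhat higher.
    def weight_memory_gib(params_billion: float, bits_per_weight: float) -> float:
        total_bytes = params_billion * 1e9 * bits_per_weight / 8
        return total_bytes / 1024**3

    print(f"{weight_memory_gib(685, 4):.1f} GiB")  # ~319 GiB for the weights alone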

Public endpoint

Use our pre-built public endpoints for free to test inference and explore DeepSeek-V3.1-Terminus capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
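
Once a public endpoint for this model appears, a request can be sent with any OpenAI-compatible client. The sketch below is illustrative only: the base URL and model identifier are placeholders, and the API token is the one issued on the token management page.

    # Minimal sketch of a chat request against an OpenAI-compatible endpoint.
    # Placeholder values: replace the base URL, model name, and token with the
    # ones shown for the endpoint you are using.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://<public-endpoint-host>/v1",  # placeholder host
        api_key="YOUR_API_TOKEN",
    )

    response = client.chat.completions.create(
        model="DeepSeek-V3.1-Terminus",  # placeholder model identifier
        messages=[
            {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
        ],
        temperature=0.6,
    )
    print(response.choices[0].message.content)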

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the deployment sketch after this list).
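
For private deployments, an open-source inference engine such as vLLM is a common choice. The snippet below is a minimal sketch, assuming the engine supports this checkpoint on your hardware; the tensor-parallel size and context length are illustrative assumptions, not tuned recommendations.

    # Illustrative offline-inference sketch with vLLM; support for this exact
    # checkpoint and the parallelism/context settings below are assumptions.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="deepseek-ai/DeepSeek-V3.1-Terminus",  # or a local path to fine-tuned weights
        tensor_parallel_size=8,   # assumed multi-GPU node; match to your hardware
        max_model_len=32768,      # reduced context to fit memory; raise it if VRAM allows
        trust_remote_code=True,
    )

    params = SamplingParams(temperature=0.6, max_tokens=512)
    outputs = llm.generate(["Explain Multi-head Latent Attention in two sentences."], params)
    print(outputs[0].outputs[0].text)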

Recommended configurations for hosting DeepSeek-V3.1-Terminus

There are no configurations for this model, context and quantization yet.


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.