DeepSeek-R1

reasoning

DeepSeek-R1 is the first generation of reasoning models developed by DeepSeek-AI and released on January 20, 2025. The model is built upon large-scale reinforcement learning (RL) training and demonstrates outstanding capabilities in solving complex tasks such as mathematics, programming, and scientific reasoning.

DeepSeek-R1 supports long chain-of-thought (CoT) generation, including self-checking, reflection, and alternative approaches to problem-solving. It achieves performance comparable to OpenAI-o1-1217 on benchmarks such as AIME 2024 (79.8%) and MATH-500 (97.3%).
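In practice, DeepSeek-R1 emits its chain-of-thought inside `<think>...</think>` tags before the final answer. A minimal sketch (the helper name and sample text are illustrative, not part of the model's API) for separating the reasoning from the answer:

```python
import re

def split_reasoning(text: str):
    """Split a DeepSeek-R1 completion into (reasoning, answer).

    R1 wraps its chain-of-thought in <think>...</think> tags;
    everything after the closing tag is the final answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No reasoning tags found: treat the whole text as the answer.
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

# Illustrative completion, not real model output:
sample = "<think>2 + 2 is 4, double-check: yes.</think>The answer is 4."
cot, answer = split_reasoning(sample)
```

Separating the two parts this way lets you log or display the reasoning independently of the answer shown to end users.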

The full version of DeepSeek-R1 contains 671 billion parameters and is highly resource-intensive. However, compact distilled versions (1.5B, 7B, 8B, 14B, 32B, 70B) are also available, derived from DeepSeek-R1 and based on Qwen and Llama. As a result, DeepSeek-R1 sets a new standard in the field of reasoning models by combining the power of large-scale RL training with practical applicability, making it one of the best open-source models.


Announce Date: 20.01.2025
Parameters: 671B
Experts: 256
Activated at inference: 37B
Context: 164K
Layers: 61
Attention Type: Multi-head Latent Attention
VRAM requirements: ~325.7 GB with 4-bit quantization
Developer: DeepSeek
Transformers Version: 4.46.3
License: MIT
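A back-of-envelope estimate shows where the VRAM figure above comes from. The sketch below computes the weight-only footprint; the listed requirement (~325.7 GB) is somewhat higher, presumably due to quantization scales and runtime buffers, and it excludes the KV cache, which grows with context length:

```python
def weight_memory_gib(n_params: float, bits_per_param: float) -> float:
    """Rough weight-only memory footprint in GiB (ignores KV cache,
    activations, and quantization overhead such as per-group scales)."""
    bytes_total = n_params * bits_per_param / 8
    return bytes_total / 2**30

# 671B parameters at 4-bit quantization:
est = weight_memory_gib(671e9, 4)   # ~312 GiB for the weights alone
```

Note that although only 37B parameters are activated per token, all 671B expert weights must still reside in memory; the MoE design reduces compute per token, not the storage requirement.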

Public endpoint

Use our pre-built public endpoints for free to test inference and explore DeepSeek-R1 capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
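Once an endpoint is available, requests would typically follow the common OpenAI-compatible chat-completions format. A hedged sketch of building such a request; the URL, model identifier, and token below are placeholders, not confirmed values:

```python
import json

# Hypothetical values: substitute the real endpoint URL and the token
# obtained from the token management page.
API_URL = "https://example.immers.cloud/v1/chat/completions"  # assumed
API_TOKEN = "YOUR_TOKEN"

headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/json",
}
payload = {
    "model": "deepseek-r1",          # assumed model identifier
    "messages": [
        {"role": "user", "content": "How many primes are below 20?"}
    ],
    "max_tokens": 2048,              # leave room for the chain-of-thought
}
body = json.dumps(payload)
# To send: requests.post(API_URL, headers=headers, data=body)
```

Reasoning models produce long chains of thought before answering, so a generous `max_tokens` budget is advisable.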

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended configurations for hosting DeepSeek-R1

Prices:
Name                           Context  vCPU  RAM, GB  Disk, GB  GPUs  Price, hour
teslaa100-6.44.512.480.nvlink  163,840  44    512      480       6     $15.37
h200-3.32.512.480              163,840  32    512      480       3     $21.08
h200-6.52.896.960              163,840  52    896      960       6     $41.82
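For budgeting, the hourly rates above can be converted into a rough monthly figure. The sketch assumes a simple hourly-rate multiplication (about 730 hours per month); actual long-term monthly billing may be discounted, so treat this only as an upper-bound estimate:

```python
def monthly_cost(price_per_hour: float, hours: int = 730) -> float:
    """Approximate monthly cost at an hourly rate (730 h ~ one month)."""
    return round(price_per_hour * hours, 2)

# teslaa100-6.44.512.480.nvlink at $15.37/hour:
cost = monthly_cost(15.37)
```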

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.