DeepSeek-R1 is the first-generation reasoning model from DeepSeek-AI, released on January 20, 2025. It is trained with large-scale reinforcement learning (RL) and demonstrates strong performance on complex tasks such as mathematics, programming, and scientific reasoning.
DeepSeek-R1 generates long chains of thought (CoT) that include self-verification, reflection, and the exploration of alternative solution paths. It achieves performance comparable to OpenAI-o1-1217 on benchmarks such as AIME 2024 (79.8%) and MATH-500 (97.3%).
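In the openly released weights, this reasoning is emitted inside `<think>...</think>` tags before the final answer. The snippet below is a minimal sketch of separating the two parts; the tag convention follows the public DeepSeek-R1 release, while the helper name is purely illustrative.

```python
import re

def split_reasoning(completion: str) -> tuple[str, str]:
    """Split an R1-style completion into (chain_of_thought, final_answer)."""
    match = re.search(r"<think>(.*?)</think>", completion, flags=re.DOTALL)
    if match is None:
        # No reasoning block found: treat the whole text as the answer.
        return "", completion.strip()
    return match.group(1).strip(), completion[match.end():].strip()

# Example completion in the format the open R1 checkpoints produce.
text = "<think>17 * 24 = 17 * 20 + 17 * 4 = 340 + 68 = 408.</think>The answer is 408."
cot, answer = split_reasoning(text)
print(cot)     # the model's internal reasoning
print(answer)  # the user-facing answer
```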
The full version of DeepSeek-R1 contains 671 billion parameters (a Mixture-of-Experts model with roughly 37B active per token) and is highly resource-intensive to serve. However, compact distilled versions (1.5B, 7B, 8B, 14B, 32B, and 70B parameters) are also available, obtained by distilling DeepSeek-R1's reasoning ability into Qwen and Llama base models. As a result, DeepSeek-R1 sets a new standard for reasoning models, combining the power of large-scale RL training with practical applicability and making it one of the strongest open-source options.
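As an illustration, a distilled checkpoint can be tried locally with Hugging Face `transformers`. This is a minimal sketch, assuming the distilled weights published under the `deepseek-ai` organization (here the 7B Qwen-based variant) and a GPU with enough memory; adjust the model id to the size you rent or download.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for illustration; swap in the 1.5B/8B/14B/32B/70B variant as needed.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # requires the `accelerate` package
)

messages = [{"role": "user", "content": "What is 17 * 24? Reason step by step."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Leave generous room for the <think> block that precedes the final answer.
output_ids = model.generate(input_ids, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```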
| Model Name | Context | Type | GPU | TPS | Status | Link |
|---|---|---|---|---|---|---|
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying a private instance when you need a dedicated DeepSeek-R1 endpoint; suggested configurations are listed below.
| Name | vCPU | RAM, MB | Disk, GB | GPU |
|---|---|---|---|---|
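Once a dedicated instance is up, clients talk to it over whatever API the serving stack exposes. The sketch below assumes an OpenAI-compatible endpoint (for example, vLLM or SGLang serving DeepSeek-R1); the base URL, API key, and model name are placeholders, not actual immers.cloud values.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://your-instance.example.com/v1",  # placeholder for your instance URL
    api_key="YOUR_API_KEY",                           # placeholder credential
)

response = client.chat.completions.create(
    model="deepseek-r1",  # the name your serving stack registers the model under
    messages=[{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
    temperature=0.6,      # moderate sampling temperature; adjust to taste
    max_tokens=4096,      # leave room for the long <think> reasoning block
)

print(response.choices[0].message.content)
```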
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.