DeepSeek-V3

DeepSeek-V3 — introduced on December 26, 2024 by the Chinese company DeepSeek, this model brought the company worldwide recognition. It employs a Mixture-of-Experts (MoE) architecture with 671 billion total parameters, of which only 37 billion are activated for each token. This design combines high output quality with computational efficiency.
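The idea behind sparse activation can be illustrated with a toy routing sketch: a small router scores the experts for each token, and only the top-k experts actually run, so most parameters sit idle on any given token. This is a minimal illustration with made-up dimensions, not the real DeepSeek-V3 routing code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes for illustration only (DeepSeek-V3 is vastly larger).
d_model, n_experts, top_k = 8, 4, 2

# One tiny feed-forward "expert" per slot, plus a router matrix.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Send token x to its top_k experts and mix their outputs."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]        # indices of the chosen experts
    w = np.exp(logits[top])
    w /= w.sum()                             # softmax over the chosen experts
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

y = moe_forward(rng.standard_normal(d_model))
print(y.shape)
```

Because only `top_k` of the `n_experts` expert matrices are multiplied per token, the compute per token scales with the activated parameters (37B) rather than the total (671B).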

The model supports a context window of up to 164K tokens and delivers strong text generation, including in Russian. It handles tasks that require analyzing long documents, creating complex content, and working with large volumes of data — for example, automating documentation, reports, educational materials, or legal consultations.

In benchmarks, DeepSeek-V3 has shown outstanding results, particularly in mathematics and programming, surpassing many open-source models and competing with GPT-4o and Claude 3.5 Sonnet. The model also successfully passed the Needle In A Haystack (NIAH) test at a context length of up to 128K. Thanks to its scalability, quality, and open license, DeepSeek-V3 is a promising choice for large-scale research and commercial projects.


Announce Date: 26.12.2024
Parameters: 671B
Experts: 256 routed + 1 shared
Activated: 37B
Context: 164K
Attention Type: Multi-head Latent Attention
VRAM requirements: 323.2 GB with 4-bit quantization
Developer: DeepSeek
Transformers Version: 4.33.1
Ollama Version: 0.5.5
License: MIT
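The VRAM figure above can be sanity-checked with back-of-the-envelope arithmetic: at 4-bit quantization, each parameter takes half a byte, so the raw weights of a 671B-parameter model occupy roughly 335 GB. The listed 323.2 GB presumably reflects the provider's own accounting (GiB vs. GB, quantization metadata, which layers are quantized); real deployments also need extra memory for the KV cache and activations.

```python
# Rough weight-memory estimate for 4-bit quantization.
total_params = 671e9      # 671B total parameters
bits_per_param = 4        # 4-bit quantization

weight_bytes = total_params * bits_per_param / 8
print(f"{weight_bytes / 1e9:.1f} GB")   # raw weights only, ~335.5 GB
```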

Public endpoint

Use our pre-built public endpoints to test inference and explore DeepSeek-V3 capabilities.
Model Name Context Type GPU TPS Status Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended configurations for hosting DeepSeek-V3

Prices:
Name vCPU RAM, MB Disk, GB GPU Price, hour

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.