DeepSeek-OCR

multimodal

The DeepSeek-OCR model is a multimodal vision-language transformer with 570 million parameters active during inference, designed for efficient optical compression of long text contexts into visual tokens. Its key insight is that an image of document text can carry the same information in significantly fewer tokens than the equivalent digital text. Architecturally, DeepSeek-OCR consists of two main components: the DeepEncoder and the DeepSeek3B-MoE decoder. The DeepEncoder processes images, producing a compressed visual representation of the text. The decoder (based on DeepSeek-VL2) reconstructs the original text and structured information from the visual tokens. This approach allows the model to maintain higher quality than larger models despite its compact size and minimal computational overhead, even when using full attention.

DeepSeek-OCR stands out from other state-of-the-art multimodal models by achieving the required OCR quality with 2-10x fewer tokens, significantly accelerating and simplifying the processing of large text documents or streams of similar documents. In benchmarks, DeepSeek-OCR demonstrates outstanding results. On the Fox benchmark, it achieves decoding accuracy of approximately 97% at a 10x compression of text into visual tokens, surpassing many contemporary OCR and vision-text models. On OmniDocBench, DeepSeek-OCR ranks among the leaders, using only about 100 tokens per image at 640×640 resolution while maintaining recognition and parsing accuracy for complex structures: formulas, tables, charts, etc. For some document categories (e.g., presentations), the model needs fewer than 64 visual tokens for high-quality recognition.
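
As a rough illustration of the token savings quoted above (a 10x optical compression ratio, with ~100 vision tokens standing in for a full page of text), a back-of-the-envelope calculation:

```python
def vision_token_budget(text_tokens: int, compression_ratio: float = 10.0) -> int:
    """Approximate number of vision tokens needed to represent a page that
    would otherwise cost `text_tokens` text tokens, at the given
    optical compression ratio."""
    return round(text_tokens / compression_ratio)

# A dense page of ~1,000 text tokens fits in ~100 vision tokens at 10x,
# in line with the ~100 tokens per 640x640 page cited for OmniDocBench.
print(vision_token_budget(1000))        # 100
print(vision_token_budget(5000, 10.0))  # 500
```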

The model is adaptive and supports several operating modes (Tiny, Small, Base, Large, Gundam) for different document types. It is well suited to large-scale digitization of scanned documents, recognition of multilingual PDFs (with support for about 100 languages), and rendering and structural parsing of documents with tables, formulas, charts, and natural images. The developers recommend DeepSeek-OCR for historical archives, long-context documents, and automation of financial workflows.
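
The operating modes differ mainly in input resolution and vision-token budget. The figures below follow those reported for DeepSeek-OCR (treat them as approximate; Gundam tiles large pages dynamically, so its budget varies), and the mode-selection helper is purely illustrative:

```python
# Native-resolution modes and their approximate vision-token budgets,
# as reported for DeepSeek-OCR.
MODES = {
    "Tiny":  {"resolution": (512, 512),   "vision_tokens": 64},
    "Small": {"resolution": (640, 640),   "vision_tokens": 100},
    "Base":  {"resolution": (1024, 1024), "vision_tokens": 256},
    "Large": {"resolution": (1280, 1280), "vision_tokens": 400},
    # "Gundam" mode tiles oversized pages dynamically; budget varies.
}

def pick_mode(max_tokens: int) -> str:
    """Illustrative helper: pick the largest fixed-resolution mode whose
    vision-token budget fits within `max_tokens`."""
    fitting = [(name, cfg["vision_tokens"]) for name, cfg in MODES.items()
               if cfg["vision_tokens"] <= max_tokens]
    return max(fitting, key=lambda item: item[1])[0]

print(pick_mode(128))  # Small
```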


Announce Date: 20.10.2025
Parameters: 3B
Experts: 64
Activated at inference: 570M
Context: 9K
Layers: 12
Attention Type: Full Attention
Developer: DeepSeek
Transformers Version: 4.46.3
License: MIT
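
The spec sheet above implies that only a small fraction of the MoE weights is active per token; a quick check (illustrative arithmetic only):

```python
# Figures from the spec sheet: 3B total parameters, 570M active per token.
total_params = 3_000_000_000
active_params = 570_000_000

active_fraction = active_params / total_params
print(f"{active_fraction:.1%} of weights active per token")  # 19.0%
```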

Public endpoint

Use our pre-built public endpoints for free to test inference and explore DeepSeek-OCR capabilities. You can obtain an API access token on the token management page after registration and verification.
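
If the endpoint exposes an OpenAI-compatible chat API (a common convention for hosted inference; check the provider documentation), a request can be assembled as below. The URL, model name, and token are placeholders, and the payload shape follows the OpenAI vision message format:

```python
import base64
import json

# Placeholders -- substitute the real endpoint URL and the API token
# obtained from the token management page.
API_URL = "https://example.com/v1/chat/completions"
API_TOKEN = "YOUR_TOKEN"

def build_ocr_request(image_bytes: bytes,
                      prompt: str = "Convert this page to text.") -> dict:
    """Build an OpenAI-style chat payload with an inline base64 image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "DeepSeek-OCR",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

payload = build_ocr_request(b"\x89PNG...")  # truncated bytes, for illustration
print(json.dumps(payload)[:60])
# Send with e.g.:
#   requests.post(API_URL, json=payload,
#                 headers={"Authorization": f"Bearer {API_TOKEN}"})
```
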
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security by processing data in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting DeepSeek-OCR

Prices:

Name                         Context  GPU  Price, hour  TPS
teslat4-1.16.16.160          8,192    1    $0.33        20.608
rtx2080ti-1.10.16.500        8,192    1    $0.38        11.008
teslaa2-1.16.32.160          8,192    1    $0.38        20.608
teslaa10-1.16.32.160         8,192    1    $0.53        35.968
rtx3080-1.16.32.160          8,192    1    $0.57        9.088
rtx3090-1.16.24.160          8,192    1    $0.83        35.968
rtx4090-1.16.32.160          8,192    1    $1.02        35.968
teslav100-1.12.64.160        8,192    1    $1.20        51.328
rtxa5000-2.16.64.160.nvlink  8,192    2    $1.23        76.715
rtx5090-1.16.64.160          8,192    1    $1.59        51.328
teslaa100-1.16.64.160        8,192    1    $2.37        143.488
h100-1.16.64.160             8,192    1    $3.83        143.488
h100nvl-1.16.96.160          8,192    1    $4.11        170.368
h200-1.16.128.160            8,192    1    $4.74        260.608
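
To compare the hourly prices above on a monthly basis, simple arithmetic (assuming roughly 720 hours per month; actual long-term monthly billing may differ):

```python
HOURS_PER_MONTH = 720  # approx. 30 days of continuous uptime

def monthly_cost(price_per_hour: float) -> float:
    """Approximate monthly cost of running one instance around the clock."""
    return round(price_per_hour * HOURS_PER_MONTH, 2)

print(monthly_cost(0.33))  # teslat4: 237.6
print(monthly_cost(4.74))  # h200:    3412.8
```
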


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.