DeepSeek-OCR-2

multimodal

DeepSeek-OCR-2 is a model designed specifically for optical character recognition tasks, and it takes a fundamentally new approach to visual information processing. Inspired by the cognitive mechanisms of human vision, the authors replace the traditional raster scan of an image (left to right, top to bottom) with a dynamic, semantically driven process. The model's key innovation is the DeepEncoder V2 encoder, which does not merely compress features but gives the system the ability to causally reorder visual information before it ever enters the language decoder.

Architecturally, DeepEncoder V2 is built on the compact language model Qwen2-0.5B, which replaces the CLIP component of the previous DeepSeek-OCR. Image processing happens in two stages: first, a lightweight tokenizer (80M parameters) compresses the image into a sequence of visual tokens, reducing their number by a factor of 16. These tokens are then fed into the Qwen2 encoder. Alongside the visual tokens, special trainable prompts called causal flow queries are appended to the sequence. The visual tokens attend to one another bidirectionally, while each causal query can "see" all visual tokens and all previous causal queries. This scheme lets the queries gradually, layer by layer, construct a meaningful ordering of visual elements, much as the human eye moves across the logical blocks of a document. Only the output states of these causal queries, which by then form a semantically ordered representation of the image, are passed to the language decoder (DeepSeek-MoE).
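The attention pattern described above can be sketched as a boolean mask over a sequence of [visual tokens | causal flow queries]. This is a minimal illustration of the pattern only; the token counts and function name below are our own, not taken from the model:

```python
import numpy as np

def causal_flow_mask(n_visual: int, n_queries: int) -> np.ndarray:
    """Attention mask for a sequence of [visual tokens | causal flow queries].

    mask[i, j] == True means position i may attend to position j.
    """
    n = n_visual + n_queries
    mask = np.zeros((n, n), dtype=bool)
    # Visual tokens attend bidirectionally to all visual tokens.
    mask[:n_visual, :n_visual] = True
    # Every causal query sees all visual tokens...
    mask[n_visual:, :n_visual] = True
    # ...plus itself and all *previous* queries (causal order among queries).
    for q in range(n_queries):
        mask[n_visual + q, n_visual : n_visual + q + 1] = True
    return mask

mask = causal_flow_mask(n_visual=4, n_queries=3)
```

Note that visual tokens never attend to the queries, so the queries can read out and reorder visual content without feeding back into it.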

On benchmarks, this translates into a measurable performance gain: DeepSeek-OCR-2 shows a 3.73% improvement over its predecessor on OmniDocBench v1.5, and also scores highly on the allenai/olmOCR-bench tests, particularly in the "Long Fine-Print Text" (90.7%) and "Mathematical Formulas from arXiv" (82.0%) categories.

Thanks to these architectural features, DeepSeek-OCR-2 opens up a wide range of practical scenarios: from digitizing complex documents (scientific articles, financial reports) while preserving their logical structure, to high-quality data preparation for training large language models by converting millions of scans into clean, machine-readable text. The model also handles images with non-linear layouts (infographics, posters) well, making it useful for information extraction from advertisements and other marketing materials.


Announce Date: 27.01.2026
Parameters: 3B
Experts: 64
Activated at inference: 500M
Context: 9K
Layers: 12
Attention Type: Full Attention
Developer: DeepSeek
Transformers Version: 4.46.3
License: Apache 2.0
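The spread between 3B total and 500M activated parameters is the usual mixture-of-experts accounting: every token runs the shared layers plus only the routed top-k of the 64 experts. A toy sketch of that arithmetic; the shared-parameter count and top_k below are hypothetical values chosen only to show the shape of the calculation, not the model's real configuration:

```python
# Toy MoE parameter accounting. shared=300M and top_k=5 are illustrative
# assumptions; only "3B total" and "64 experts" come from the spec above.
def moe_params(shared: float, n_experts: int, per_expert: float, top_k: int):
    """Return (total, activated) parameter counts for a routed MoE."""
    total = shared + n_experts * per_expert
    activated = shared + top_k * per_expert
    return total, activated

per_expert = (3e9 - 300e6) / 64   # back out a per-expert size from a 3B total
total, activated = moe_params(shared=300e6, n_experts=64,
                              per_expert=per_expert, top_k=5)
```

With these assumed sizes, activated comes out near the listed 500M while the full 3B stays resident, which is why MoE models run much cheaper at inference than their total parameter count suggests.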

Public endpoint

Use our pre-built public endpoints for free to test inference and explore DeepSeek-OCR-2 capabilities. You can obtain an API access token on the token management page after registration and verification.
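Once you have a token, a request against an endpoint would typically follow the OpenAI-compatible chat schema with the page image sent as a base64 data URL. A sketch of building such a request; the model name, prompt, and payload layout are assumptions about a typical OpenAI-style serving setup, not documented values for this service:

```python
import base64

# Hypothetical request builder for an OpenAI-compatible endpoint serving
# DeepSeek-OCR-2; "deepseek-ocr-2" and the message schema are assumptions.
def build_ocr_request(image_bytes: bytes, api_token: str) -> dict:
    """Assemble headers and a JSON payload for a document-OCR chat request."""
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    headers = {"Authorization": f"Bearer {api_token}"}
    payload = {
        "model": "deepseek-ocr-2",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                {"type": "text", "text": "Convert this document to markdown."},
            ],
        }],
    }
    return {"headers": headers, "json": payload}

request = build_ocr_request(image_bytes=b"...", api_token="YOUR_TOKEN")
```

The returned dict can be passed straight to an HTTP client (e.g. `requests.post(url, **request)`) once you substitute the real endpoint URL and token.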
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting DeepSeek-OCR-2

Prices:

Name                         Context  Type    GPU  Price/hour  TPS     Max Concurrency
teslat4-1.16.16.160          8,192    -       1    $0.33       22.40   6
rtx2080ti-1.10.16.500        8,192    -       1    $0.38       12.80   6
teslaa2-1.16.32.160          8,192    -       1    $0.38       22.40   6
teslaa10-1.16.32.160         8,192    -       1    $0.53       37.76   6
rtx3080-1.16.32.160          8,192    -       1    $0.57       10.88   6
rtx3090-1.16.24.160          8,192    -       1    $0.83       37.76   6
rtx4090-1.16.32.160          8,192    -       1    $1.02       37.76   6
teslav100-1.12.64.160        8,192    -       1    $1.20       53.12   6
rtxa5000-2.16.64.160.nvlink  8,192    tensor  2    $1.23       78.51   3
rtx5090-1.16.64.160          8,192    -       1    $1.59       53.12   6
teslaa100-1.16.64.160        8,192    -       1    $2.37       145.28  6
h100-1.16.64.160             8,192    -       1    $3.83       145.28  6
h100nvl-1.16.96.160          8,192    -       1    $4.11       172.16  6
h200-1.16.128.160            8,192    -       1    $4.74       262.40  6

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.