ERNIE-4.5-VL-28B-A3B-PT

reasoning
multimodal

ERNIE-4.5-VL-28B-A3B-PT is a multimodal model from the ERNIE 4.5 family, built on a heterogeneous Mixture-of-Experts (MoE) architecture. It has 28 billion total parameters, with only 3 billion activated per inference pass, ensuring high computational efficiency. A key innovation lies in its modality-specific expert groups: separate experts handle textual and visual inputs, while shared experts and self-attention layers enable effective cross-modal interaction. The model features an adaptive vision encoder that processes images at arbitrary resolutions without distorting their aspect ratio, preserving the original proportions. For video, it employs an adaptive frame sampling strategy with temporal timestamps rendered directly onto frames, enabling precise temporal understanding. It supports a context window of up to 131,072 tokens, allowing it to handle lengthy documents and extended video sequences.
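The modality-specific routing described above can be illustrated with a toy sketch. This is illustrative only: the real ERNIE-4.5 router, gating scores, and top-k selection are far more involved, and the 64/64/2 split between text, vision, and shared experts is an assumption consistent with the 130-expert figure below, not a confirmed breakdown.

```python
# Toy sketch of modality-specific MoE routing (illustrative only).
# Assumption: text tokens are routed only to text experts, vision tokens
# only to vision experts, and shared experts see every token.

TEXT_EXPERTS = [f"text_e{i}" for i in range(64)]    # hypothetical split
VISION_EXPERTS = [f"vis_e{i}" for i in range(64)]   # hypothetical split
SHARED_EXPERTS = ["shared_e0", "shared_e1"]

def route(token_modality: str, router_score: float) -> list[str]:
    """Pick the expert pool by modality, select one routed expert from a
    (here, fake) router score; shared experts always participate."""
    pool = TEXT_EXPERTS if token_modality == "text" else VISION_EXPERTS
    routed = pool[int(router_score * len(pool)) % len(pool)]
    return [routed] + SHARED_EXPERTS

print(route("text", 0.5))    # a text expert plus the shared experts
print(route("image", 0.5))   # a vision expert plus the shared experts
```

Cross-modal interaction happens in the shared experts and the self-attention layers, which is why they appear in every token's path regardless of modality.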

The model offers two operational modes—thinking and non-thinking—making it versatile across diverse tasks. The thinking mode enhances reasoning for complex visual challenges (e.g., STEM, mathematics, puzzles), while non-thinking mode enables rapid processing of simple, routine requests. Compared to the flagship ERNIE-4.5-VL-424B-A47B, this compact 28B-A3B variant exhibits only marginal performance degradation while drastically reducing computational requirements.
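In practice, the two modes are typically toggled per request. A minimal sketch of building such a request follows; the flag name `enable_thinking` and its placement under `chat_template_kwargs` are assumptions modeled on common open-model serving conventions, so check your inference server's documentation for the actual parameter.

```python
import json

# Sketch of an OpenAI-style chat request that toggles reasoning on or off.
# "enable_thinking" under "chat_template_kwargs" is an assumed parameter name.
def build_request(prompt: str, thinking: bool) -> str:
    payload = {
        "model": "ERNIE-4.5-VL-28B-A3B-PT",
        "messages": [{"role": "user", "content": prompt}],
        # Hypothetical switch between thinking and non-thinking modes:
        "chat_template_kwargs": {"enable_thinking": thinking},
    }
    return json.dumps(payload)

fast = build_request("What color is the sky?", thinking=False)
deep = build_request("Solve this geometry puzzle step by step.", thinking=True)
```

Routing simple lookups through non-thinking mode keeps latency low, while reserving thinking mode for STEM and puzzle-style inputs where the extra reasoning tokens pay off.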

The model’s multimodal capabilities enable a wide range of practical applications: its strong performance on OCRBench (885) and DocVQA (94.1) demonstrates effectiveness in processing scanned documents, invoices, and forms; high scores on ChartQA (82.2) and TableVQA (70.0) make it suitable for analyzing charts and tables in financial and scientific data; its video understanding capabilities (MVBench 72.0, VideoMME 74.4, LongVideoBench 62.1) are valuable for security and surveillance systems; and its precise object counting (CountBench 87.6) and visual perception (RealWorldQA 69.2) can be leveraged in retail for inventory management and visual search. Released under the permissive Apache 2.0 license, the model can be freely used in commercial projects.


Announce Date: 28.06.2025
Parameters: 29B
Experts: 130
Activated at inference: 3B
Context: 132K
Layers: 28
Attention Type: Full Attention
Developer: Baidu, Inc.
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore ERNIE-4.5-VL-28B-A3B-PT capabilities. You can obtain an API access token on the token management page after registration and verification.
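For a multimodal model, requests usually combine text and image content parts. The sketch below builds such a request without sending it; the base URL is a placeholder, `YOUR_TOKEN` stands for the token from the token management page, and the `image_url` content-part schema follows the common OpenAI-compatible convention, which the endpoint may or may not use.

```python
import json
import urllib.request

# Placeholder endpoint URL; substitute the real one from the endpoint page.
BASE_URL = "https://example-endpoint.invalid/v1/chat/completions"

body = {
    "model": "ERNIE-4.5-VL-28B-A3B-PT",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/cat.jpg"}},
        ],
    }],
}

req = urllib.request.Request(
    BASE_URL,
    data=json.dumps(body).encode(),
    headers={"Authorization": "Bearer YOUR_TOKEN",
             "Content-Type": "application/json"},
)
# urllib.request.urlopen(req) would send the request; omitted here.
```

The same payload works with any OpenAI-compatible client library by pointing its base URL at the endpoint.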
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
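For the custom-weights scenario, a private instance can run any OpenAI-compatible server. A sketch using vLLM is shown below; the adapter name and path are hypothetical, and whether vLLM supports LoRA for this particular model should be verified against its documentation.

```shell
# Sketch: serving the model with a LoRA adapter via vLLM on a private instance.
# The adapter name/path is hypothetical; --max-model-len requests the full
# 131,072-token context, which requires sufficient GPU memory.
vllm serve baidu/ERNIE-4.5-VL-28B-A3B-PT \
    --max-model-len 131072 \
    --tensor-parallel-size 2 \
    --enable-lora \
    --lora-modules my-adapter=/path/to/adapter
```

Set `--tensor-parallel-size` to the GPU count of the configuration you rent (see the tables below).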

Recommended server configurations for hosting ERNIE-4.5-VL-28B-A3B-PT

Prices:
Name                          Context  Parallelism  GPUs  Price/hour  TPS
teslat4-2.16.32.160           131,072  tensor       2     $0.54       1.471
teslaa2-2.16.32.160           131,072  tensor       2     $0.57       1.471
rtx2080ti-3.12.24.120         131,072  pipeline     3     $0.84       1.242
teslaa10-2.16.64.160          131,072  tensor       2     $0.93       3.528
rtx2080ti-4.16.32.160         131,072  tensor       4     $1.12       2.299
teslav100-1.12.64.160         131,072  -            1     $1.20       1.828
rtxa5000-2.16.64.160.nvlink   131,072  tensor       2     $1.23       3.528
rtx3090-2.16.64.160           131,072  tensor       2     $1.56       3.528
rtx5090-1.16.64.160           131,072  -            1     $1.59       1.828
rtx3080-4.16.64.160           131,072  tensor       4     $1.82       1.785
rtx4090-2.16.64.160           131,072  tensor       2     $1.92       3.528
teslaa100-1.16.64.160         131,072  -            1     $2.37       7.999
h100-1.16.64.160              131,072  -            1     $3.83       7.999
h100nvl-1.16.96.160           131,072  -            1     $4.11       9.799
h200-1.16.128.160             131,072  -            1     $4.74       15.842
Prices:
Name                          Context  Parallelism  GPUs  Price/hour  TPS
teslat4-3.32.64.160           131,072  pipeline     3     $0.88       1.242
teslaa10-2.16.64.160          131,072  tensor       2     $0.93       1.599
teslat4-4.16.64.160           131,072  tensor       4     $0.96       2.942
teslaa2-3.32.128.160          131,072  pipeline     3     $1.06       1.242
rtxa5000-2.16.64.160.nvlink   131,072  tensor       2     $1.23       1.599
teslaa2-4.32.128.160          131,072  tensor       4     $1.26       2.942
rtx3090-2.16.64.160           131,072  tensor       2     $1.56       1.599
rtx4090-2.16.64.160           131,072  tensor       2     $1.92       1.599
teslav100-2.16.64.240         131,072  tensor       2     $2.22       3.656
teslaa100-1.16.64.160         131,072  -            1     $2.37       6.070
rtx5090-2.16.64.160           131,072  tensor       2     $2.93       3.656
h100-1.16.64.160              131,072  -            1     $3.83       6.070
h100nvl-1.16.96.160           131,072  -            1     $4.11       7.870
h200-1.16.128.160             131,072  -            1     $4.74       13.913
Prices:
Name                          Context  Parallelism  GPUs  Price/hour  TPS
teslaa2-6.32.128.160          131,072  pipeline     6     $1.65       1.794
teslaa10-4.16.128.160         131,072  tensor       4     $1.75       2.509
rtxa5000-4.16.128.160.nvlink  131,072  tensor       4     $2.34       2.509
teslaa100-1.16.128.160        131,072  -            1     $2.50       1.523
rtx3090-4.16.96.320           131,072  tensor       4     $2.97       2.509
rtx4090-4.16.96.320           131,072  tensor       4     $3.68       2.509
teslav100-3.64.256.320        131,072  pipeline     3     $3.89       2.866
h100-1.16.128.160             131,072  -            1     $3.95       1.523
h100nvl-1.16.96.160           131,072  -            1     $4.11       3.323
rtx5090-3.16.96.160           131,072  pipeline     3     $4.34       2.866
teslav100-4.32.96.160         131,072  tensor       4     $4.35       6.623
h200-1.16.128.160             131,072  -            1     $4.74       9.366
rtx5090-4.16.128.160          131,072  tensor       4     $5.74       6.623

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.