ERNIE-4.5-VL-28B-A3B-PT

reasoning
multimodal

ERNIE-4.5-VL-28B-A3B-PT is a multimodal model from the ERNIE 4.5 family, built on a heterogeneous Mixture-of-Experts (MoE) architecture. It has 28 billion total parameters, of which only about 3 billion are activated per token, keeping inference computationally efficient. A key innovation is its modality-specific expert groups: separate experts handle textual and visual inputs, while shared experts and self-attention layers enable effective cross-modal interaction. The model features an adaptive vision encoder that processes images at arbitrary resolutions while preserving their original aspect ratio. For video, it employs an adaptive frame-sampling strategy with timestamps rendered directly onto frames, enabling precise temporal understanding. It supports a context window of up to 131,072 tokens, allowing it to handle lengthy documents and extended video sequences.
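For local experimentation, the model can in principle be driven through the standard Hugging Face stack. The snippet below is a minimal sketch, assuming the weights are published as baidu/ERNIE-4.5-VL-28B-A3B-PT with a transformers-compatible processor and chat template loaded via trust_remote_code=True; the exact message schema and generation settings should be checked against the official model card.

```python
# Minimal local-inference sketch. Assumptions: the checkpoint ships its own
# processor and chat template usable through transformers' Auto* classes;
# the image/text message schema below may differ from the model's own docs.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

MODEL_ID = "baidu/ERNIE-4.5-VL-28B-A3B-PT"

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # only ~3B parameters are active per token,
    device_map="auto",           # but all expert weights must still fit in memory
    trust_remote_code=True,
)

# One image plus one question, routed through the model's chat template.
messages = [{
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        {"type": "text", "text": "Summarize the trend shown in this chart."},
    ],
}]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)

# Strip the prompt tokens and print only the newly generated answer.
answer = processor.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)
```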

The model offers two operational modes—thinking and non-thinking—making it versatile across diverse tasks. The thinking mode enhances reasoning for complex visual challenges (e.g., STEM, mathematics, puzzles), while non-thinking mode enables rapid processing of simple, routine requests. Compared to the flagship ERNIE-4.5-VL-424B-A47B, this compact 28B-A3B variant exhibits only marginal performance degradation while drastically reducing computational requirements.
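When the model sits behind an OpenAI-compatible endpoint (a public endpoint or a private instance), the mode is usually toggled through a chat-template option. The sketch below is illustrative only: the base URL is a placeholder, and the enable_thinking parameter name is an assumption to verify against the serving stack's documentation.

```python
# Sketch of switching between thinking and non-thinking modes over an
# OpenAI-compatible API. The "enable_thinking" knob is hypothetical; check
# the deployment's docs for the actual chat-template option it exposes.
from openai import OpenAI

client = OpenAI(base_url="https://your-endpoint.example.com/v1", api_key="YOUR_TOKEN")

def ask(question: str, image_url: str, thinking: bool) -> str:
    response = client.chat.completions.create(
        model="ERNIE-4.5-VL-28B-A3B-PT",
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": question},
            ],
        }],
        extra_body={"chat_template_kwargs": {"enable_thinking": thinking}},
    )
    return response.choices[0].message.content

# Thinking mode for a visual math puzzle, non-thinking mode for a quick caption.
print(ask("Solve the geometry problem in this figure step by step.",
          "https://example.com/problem.png", thinking=True))
print(ask("Describe this photo in one sentence.",
          "https://example.com/photo.jpg", thinking=False))
```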

The model’s multimodal capabilities enable a wide range of practical applications. Its strong performance on OCRBench (885) and DocVQA (94.1) demonstrates effectiveness in processing scanned documents, invoices, and forms; high scores on ChartQA (82.2) and TableVQA (70.0) make it suitable for analyzing charts and tables in financial and scientific data; its video understanding capabilities (MVBench 72.0, VideoMME 74.4, LongVideoBench 62.1) are valuable for security and surveillance systems; and its precise object counting (CountBench 87.6) and visual perception (RealWorldQA 69.2) can be leveraged in retail for inventory management and visual search. Released under the permissive Apache 2.0 license, the model can be used freely in commercial projects.


Announce Date: 28.06.2025
Parameters: 29B
Experts: 130
Activated at inference: 3B
Context: 131K
Layers: 28
Attention Type: Full Attention
VRAM requirements: 23.0 GB with 4-bit quantization (see the rough estimate after this list)
Developer: Baidu, Inc.
License: Apache 2.0
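The 23.0 GB figure above can be sanity-checked with rough arithmetic. This is only an estimate: the assumed ~4.5 effective bits per weight (4-bit values plus quantization scales) is a typical ballpark, and real usage depends on context length, batch size, and the serving framework.

```python
# Back-of-the-envelope check of the 4-bit VRAM figure. All constants here
# are assumptions for illustration, not measured values.
total_params = 29e9          # total parameter count from the spec list
bits_per_weight = 4.5        # 4-bit weights plus quantization metadata (assumed)
weights_gb = total_params * bits_per_weight / 8 / 1e9

print(f"quantized weights: ~{weights_gb:.1f} GB")                                   # ~16.3 GB
print(f"headroom for KV cache, activations, runtime: ~{23.0 - weights_gb:.1f} GB")  # ~6.7 GB
```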

Public endpoint

Use our pre-built public endpoints free of charge to test inference and explore the capabilities of ERNIE-4.5-VL-28B-A3B-PT. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the launch sketch after this list).
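As a starting point, the multi-GPU configurations listed below map directly onto an inference engine's tensor-parallel settings. The following is a sketch using vLLM's offline Python API; whether your installed vLLM version supports the ERNIE 4.5 VL architecture is an assumption to verify, and Baidu's FastDeploy toolkit is another common serving option for the ERNIE 4.5 family.

```python
# Sketch: loading the model on a 2-GPU, tensor-parallel instance (for example
# one of the 2x GPU configurations in the tables below). Assumptions: the
# installed vLLM build supports this architecture and, if needed, LoRA for it.
from vllm import LLM, SamplingParams

llm = LLM(
    model="baidu/ERNIE-4.5-VL-28B-A3B-PT",
    trust_remote_code=True,
    tensor_parallel_size=2,   # split the weights across both GPUs
    max_model_len=131072,     # expose the full 131K context window
    enable_lora=True,         # allow attaching custom LoRA adapters (if supported)
)

params = SamplingParams(temperature=0.2, max_tokens=256)
outputs = llm.generate(["Briefly explain what a Mixture-of-Experts model is."], params)
print(outputs[0].outputs[0].text)
```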

Recommended configurations for hosting ERNIE-4.5-VL-28B-A3B-PT

Prices:
Name | Context | Parallelism | vCPU | RAM, MB | Disk, GB | GPUs | Price, hour
teslat4-2.16.32.160 | 131,072 | tensor | 16 | 32768 | 160 | 2 | $0.54
teslaa2-2.16.32.160 | 131,072 | tensor | 16 | 32768 | 160 | 2 | $0.57
rtx2080ti-3.12.24.120 | 131,072 | pipeline | 12 | 24576 | 120 | 3 | $0.84
teslaa10-2.16.64.160 | 131,072 | tensor | 16 | 65536 | 160 | 2 | $0.93
rtx2080ti-4.16.32.160 | 131,072 | tensor | 16 | 32768 | 160 | 4 | $1.12
teslav100-1.12.64.160 | 131,072 | - | 12 | 65536 | 160 | 1 | $1.20
rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 16 | 65536 | 160 | 2 | $1.23
rtx5090-1.16.64.160 | 131,072 | - | 16 | 65536 | 160 | 1 | $1.59
rtx3090-2.16.64.160 | 131,072 | tensor | 16 | 65536 | 160 | 2 | $1.67
rtx3080-4.16.64.160 | 131,072 | tensor | 16 | 65536 | 160 | 4 | $1.82
rtx4090-2.16.64.160 | 131,072 | tensor | 16 | 65536 | 160 | 2 | $2.19
teslaa100-1.16.64.160 | 131,072 | - | 16 | 65536 | 160 | 1 | $2.37
teslah100-1.16.64.160 | 131,072 | - | 16 | 65536 | 160 | 1 | $3.83
h200-1.16.128.160 | 131,072 | - | 16 | 131072 | 160 | 1 | $4.74
Prices:
Name | Context | Parallelism | vCPU | RAM, MB | Disk, GB | GPUs | Price, hour
teslat4-3.32.64.160 | 131,072 | pipeline | 32 | 65536 | 160 | 3 | $0.88
teslaa10-2.16.64.160 | 131,072 | tensor | 16 | 65536 | 160 | 2 | $0.93
teslat4-4.16.64.160 | 131,072 | tensor | 16 | 65536 | 160 | 4 | $0.96
teslaa2-3.32.128.160 | 131,072 | pipeline | 32 | 131072 | 160 | 3 | $1.06
rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 16 | 65536 | 160 | 2 | $1.23
teslaa2-4.32.128.160 | 131,072 | tensor | 32 | 131072 | 160 | 4 | $1.26
rtx3090-2.16.64.160 | 131,072 | tensor | 16 | 65536 | 160 | 2 | $1.67
rtx4090-2.16.64.160 | 131,072 | tensor | 16 | 65536 | 160 | 2 | $2.19
teslav100-2.16.64.240 | 131,072 | tensor | 16 | 65535 | 240 | 2 | $2.22
teslaa100-1.16.64.160 | 131,072 | - | 16 | 65536 | 160 | 1 | $2.37
rtx5090-2.16.64.160 | 131,072 | tensor | 16 | 65536 | 160 | 2 | $2.93
teslah100-1.16.64.160 | 131,072 | - | 16 | 65536 | 160 | 1 | $3.83
h200-1.16.128.160 | 131,072 | - | 16 | 131072 | 160 | 1 | $4.74
Prices:
Name | Context | Parallelism | vCPU | RAM, MB | Disk, GB | GPUs | Price, hour
teslaa2-6.32.128.160 | 131,072 | pipeline | 32 | 131072 | 160 | 6 | $1.65
teslaa10-4.16.128.160 | 131,072 | tensor | 16 | 131072 | 160 | 4 | $1.75
rtxa5000-4.16.128.160.nvlink | 131,072 | tensor | 16 | 131072 | 160 | 4 | $2.34
teslaa100-1.16.128.160 | 131,072 | - | 16 | 131072 | 160 | 1 | $2.50
rtx3090-4.16.96.320 | 131,072 | tensor | 16 | 98304 | 320 | 4 | $3.18
teslav100-3.64.256.320 | 131,072 | pipeline | 64 | 262144 | 320 | 3 | $3.89
teslah100-1.16.128.160 | 131,072 | - | 16 | 131072 | 160 | 1 | $3.95
rtx4090-4.16.96.320 | 131,072 | tensor | 16 | 98304 | 320 | 4 | $4.22
rtx5090-3.16.96.160 | 131,072 | pipeline | 16 | 98304 | 160 | 3 | $4.34
teslav100-4.32.96.160 | 131,072 | tensor | 32 | 98304 | 160 | 4 | $4.35
h200-1.16.128.160 | 131,072 | - | 16 | 131072 | 160 | 1 | $4.74
rtx5090-4.16.128.160 | 131,072 | tensor | 16 | 131072 | 160 | 4 | $5.74


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.