Qwen3-VL-4B-Instruct

multimodal

Qwen3-VL-4B-Instruct is a compact 4-billion-parameter multimodal model designed for efficient deployment on resource-constrained servers while retaining the full functionality of the Qwen3-VL series. Despite being half the size of the 8B version, it preserves all of the series' key architectural innovations: Interleaved-MRoPE for video understanding, DeepStack for multi-level visual feature fusion, and Text-Timestamp Alignment for precise temporal localization. The tight integration of text and visual modalities yields multimodal context understanding comparable to that of text-only LLMs.

In terms of performance, Qwen3-VL-4B-Instruct approaches the results of Qwen2.5-VL-7B, demonstrating that the reduction in model size was achieved without significant loss of quality. The model supports a native context of 256K tokens (expandable to 1M), enabling the processing of long documents, multi-hour videos, and complex multimodal dialogues. Advanced OCR capabilities, with support for 32 languages and resilience to challenging capture conditions, make the 4B model a full-fledged solution for intelligent document processing tasks despite its compact size.

Qwen3-VL-4B-Instruct is an ideal fit for scenarios that require a balance between performance and efficiency: deployment on consumer devices, processing large volumes of visual content, fast response times for integration into real-time applications, and research projects. Furthermore, the open Apache 2.0 license permits free commercial use of the model, making it accessible to a wide range of users, from startups to large enterprises.


Announce Date: 15.10.2025
Parameters: 4B
Context: 256K (262,144 tokens)
Layers: 36
Attention Type: Full Attention
VRAM requirements: 42.0 GB with 4-bit quantization
Developer: Qwen
Transformers Version: 4.57.0.dev0
License: Apache 2.0
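The 42 GB VRAM figure above may look surprising for a 4B model: at 4-bit quantization the weights themselves take only about 2 GB, so the budget is dominated by the KV cache at the full 262,144-token context. The back-of-the-envelope sketch below illustrates this; the layer count comes from the spec above, while the KV-head count and head dimension are assumptions typical for models of this size, not published figures.

```python
def estimate_vram_gb(params_b=4.0, bits=4, layers=36, kv_heads=8,
                     head_dim=128, context=262_144, kv_bytes=2):
    """Rough VRAM estimate: quantized weights + FP16 KV cache.

    `layers` is taken from the spec sheet; `kv_heads` and `head_dim`
    are illustrative assumptions, not confirmed model parameters.
    """
    weights_gb = params_b * 1e9 * bits / 8 / 1e9  # 4-bit weights
    # KV cache: two tensors (K and V) per layer, per token, per KV head
    kv_gb = 2 * layers * kv_heads * head_dim * context * kv_bytes / 1e9
    return weights_gb, kv_gb

w, kv = estimate_vram_gb()
print(f"weights ~ {w:.1f} GB, KV cache ~ {kv:.1f} GB")
```

Under these assumptions the total lands near 41 GB, consistent with the 42 GB recommendation; shrinking the served context shrinks the KV cache proportionally.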

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3-VL-4B-Instruct capabilities. You can obtain an API access token on the token management page after registration and verification.
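As an illustration, assuming the endpoint follows the widely used OpenAI-compatible chat-completions convention (the base URL, model name, and image URL below are placeholders, not confirmed values), a multimodal request body can be built like this:

```python
import json

# Placeholder values -- substitute the real token from the token
# management page and the endpoint URL shown for the model.
API_TOKEN = "YOUR_API_TOKEN"
BASE_URL = "https://example-endpoint.invalid"  # assumption, not a real URL

# A single user turn mixing an image part and a text part.
payload = {
    "model": "Qwen3-VL-4B-Instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/invoice.png"}},
                {"type": "text",
                 "text": "Extract the invoice number and total amount."},
            ],
        }
    ],
    "max_tokens": 512,
}

headers = {"Authorization": f"Bearer {API_TOKEN}",
           "Content-Type": "application/json"}

# POST this with any HTTP client, e.g.:
# requests.post(f"{BASE_URL}/v1/chat/completions",
#               headers=headers, data=json.dumps(payload))
body = json.dumps(payload)
```

The `content` list format (typed image and text parts) is the standard OpenAI-compatible shape for vision models; a plain string still works for text-only turns.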
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
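As one concrete sketch of the custom-weights scenario, a private instance could serve the model with vLLM's OpenAI-compatible server. The paths and flag values below are illustrative assumptions, not a tested configuration for this model:

```shell
# Serve the base model; --max-model-len caps the context to fit GPU
# memory (raise it on larger instances), --tensor-parallel-size splits
# the model across the instance's GPUs.
vllm serve Qwen/Qwen3-VL-4B-Instruct \
    --max-model-len 32768 \
    --tensor-parallel-size 2

# Attach a fine-tuned LoRA adapter on top of the base weights
# (the adapter name and path are hypothetical):
vllm serve Qwen/Qwen3-VL-4B-Instruct \
    --enable-lora \
    --lora-modules my-adapter=/path/to/lora
```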

Recommended configurations for hosting Qwen3-VL-4B-Instruct

Prices:
Name | Context, tokens | vCPU | RAM, MB | Disk, GB | GPUs | Price, hour
teslat4-4.16.64.160 | 262,144 | 16 | 65,536 | 160 | 4 | $0.96
teslaa2-4.32.128.160 | 262,144 | 32 | 131,072 | 160 | 4 | $1.26
teslaa10-3.16.96.160 | 262,144 | 16 | 98,304 | 160 | 3 | $1.34
teslav100-2.16.64.240 | 262,144 | 16 | 65,535 | 240 | 2 | $2.22
rtxa5000-4.16.128.160.nvlink | 262,144 | 16 | 131,072 | 160 | 4 | $2.34
rtx3090-3.16.96.160 | 262,144 | 16 | 98,304 | 160 | 3 | $2.45
teslaa100-1.16.64.160 | 262,144 | 16 | 65,536 | 160 | 1 | $2.58
rtx5090-2.16.64.160 | 262,144 | 16 | 65,536 | 160 | 2 | $2.93
rtx4090-3.16.96.160 | 262,144 | 16 | 98,304 | 160 | 3 | $3.23
teslah100-1.16.64.160 | 262,144 | 16 | 65,536 | 160 | 1 | $5.11
h200-1.16.128.160 | 262,144 | 16 | 131,072 | 160 | 1 | $6.98

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.