Qwen3-VL-235B-A22B-Instruct

multimodal

Qwen3-VL-235B-A22B-Instruct is a flagship open-source multimodal model featuring 236 billion parameters in a Mixture of Experts (MoE) architecture (22 billion active at inference) and a native context length of 256K tokens (extendable up to 1M). Its multimodal capabilities rest on three key architectural innovations.

First, Interleaved-MRoPE (Multi-dimensional Rotary Position Embedding) distributes frequency information uniformly across three dimensions (time, width, and height), markedly improving comprehension of long video sequences and preserving temporal coherence even in hour-long videos. Second, DeepStack aggregates multi-level features from the Vision Transformer (ViT), capturing fine-grained visual details and improving image-text alignment accuracy. Third, the Text-Timestamp Alignment mechanism replaces the earlier T-RoPE approach, enabling precise second-level alignment of textual descriptions to specific timestamps and substantially strengthening the model's ability to localize and interpret events over time.

Additionally, the model implements advanced spatial perception, assessing object positions, viewpoints, and occlusions, which yields strong 2D and 3D grounding. Together, these components enable deep, seamless integration of text and images, allowing the model to achieve outstanding performance.
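The core idea of Interleaved-MRoPE can be illustrated with a toy sketch. This is not the official implementation: the round-robin axis assignment and the standard RoPE inverse-frequency schedule below are assumptions chosen to show how each of the three axes (time, height, width) receives both low- and high-frequency rotary slots.

```python
import math

def interleaved_mrope_angles(t, h, w, dim=16, base=10000.0):
    """Toy sketch of interleaved multi-dimensional RoPE: each rotary
    frequency pair is assigned to one of three position axes (time,
    height, width) in a round-robin pattern, so frequencies are spread
    uniformly across all three dimensions instead of being split into
    contiguous per-axis blocks."""
    pos = (t, h, w)
    angles = []
    for i in range(dim // 2):              # one angle per 2-dim rotary pair
        axis = i % 3                       # interleave: t, h, w, t, h, w, ...
        freq = base ** (-2.0 * i / dim)    # standard RoPE inverse frequency
        angles.append(pos[axis] * freq)
    return angles

# For a patch at time step 5, row 2, column 7:
a = interleaved_mrope_angles(t=5, h=2, w=7)
```

Because the axes alternate, the time axis is no longer confined to only the lowest-frequency slots, which is one intuition for why this scheme helps with very long videos.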

Qwen3-VL-235B-A22B-Instruct offers advanced visual-agent capabilities, enabling it to operate graphical user interfaces (GUIs) on both PCs and mobile devices: recognizing UI elements, understanding button functions, invoking tools, and even automating routine tasks such as form filling. The model demonstrates exceptional proficiency in programming, generating application code directly from screenshots, design mockups, and video demonstrations, which significantly accelerates UI prototyping and development.

The model supports OCR in 32 languages and excels at extracting text and analyzing the structure of complex multilingual documents, including PDFs, low-quality images, and documents with unconventional formatting, charts, and tables.

Importantly, this visual extension is built upon Qwen/Qwen3-235B-A22B-Instruct-2507—the best open-source model in the Qwen3 series—providing Qwen3-VL not only with the ability to “see,” but also to deeply understand, reason about, and act upon multimodal inputs.


Announce Date: 23.09.2025
Parameters: 236B
Experts: 128
Activated at inference: 22B
Context: 262K (262,144 tokens)
Layers: 94
Attention Type: Full Attention
Developer: Qwen
Transformers Version: 4.57.0.dev0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3-VL-235B-A22B-Instruct capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
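Once a public or private endpoint is available, a request could look like the sketch below. The message schema follows the common OpenAI-compatible format for vision models; the exact endpoint URL, model name string, and field names are assumptions based on that convention, not confirmed by this page.

```python
import json

def build_vision_request(image_url: str, prompt: str) -> dict:
    """Assemble a hypothetical OpenAI-compatible chat payload that pairs
    an image with a text instruction for a multimodal model."""
    return {
        "model": "Qwen3-VL-235B-A22B-Instruct",  # assumed model identifier
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": prompt},
            ],
        }],
    }

payload = build_vision_request(
    "https://example.com/form.png",
    "Extract all text fields from this form.",
)
print(json.dumps(payload, indent=2))
```

The same payload shape would be sent with an `Authorization: Bearer <token>` header using the API access token from the token management page.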

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting Qwen3-VL-235B-A22B-Instruct

Prices:

Name                           Context   Type      GPU  Price, hour  TPS
teslaa100-2.24.256.240         32,000    tensor    2    $4.93        0.220
teslaa100-3.32.384.240         262,144   pipeline  3    $7.36        1.699
rtx4090-8.44.256.240           128,000   tensor    8    $7.52        0.514
h100-2.24.256.240              32,000    tensor    2    $7.85        0.220
h100nvl-2.24.192.240           128,000   tensor    2    $8.17        0.756
rtx5090-6.44.256.240           128,000   pipeline  6    $8.86        0.620
teslaa100-4.16.256.240         262,144   tensor    4    $9.14        3.178
h200-2.24.256.240              262,144   tensor    2    $9.41        2.556
rtx5090-8.44.256.240           262,144   tensor    8    $11.55       1.739
h100-3.32.384.240              262,144   pipeline  3    $11.73       1.699
h100nvl-3.24.384.480           262,144   pipeline  3    $12.38       2.503
h100-4.16.256.240              262,144   tensor    4    $14.96       3.178
h100nvl-4.32.384.480           262,144   tensor    4    $16.23       4.250
Prices:

Name                           Context   Type      GPU  Price, hour  TPS
teslaa100-4.16.256.480         128,000   tensor    4    $9.17        1.176
h200-2.24.256.320              128,000   tensor    2    $9.42        0.554
teslaa100-4.32.384.320.nvlink  262,144   tensor    4    $9.50        1.176
h100nvl-3.24.384.480           128,000   pipeline  3    $12.38       0.501
h200-3.32.512.480              262,144   pipeline  3    $14.36       3.201
h100-4.16.256.480              128,000   tensor    4    $14.99       1.176
h100-4.44.512.320              262,144   tensor    4    $15.65       1.176
h100nvl-4.32.384.480           262,144   tensor    4    $16.23       2.248
h200-4.32.768.480              262,144   tensor    4    $19.23       5.848
Prices:

Name                           Context   Type      GPU  Price, hour  TPS
teslaa100-8.44.704.960.nvlink  262,144   tensor    8    $18.78       1.793
h200-4.32.768.640              128,000   tensor    4    $19.25       0.550
h200-6.52.896.640              262,144   pipeline  6    $28.36       5.844
h200-8.52.1024.640             262,144   tensor    8    $37.34       11.137
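As a rough way to compare the configurations above, hourly price and throughput can be converted into a cost per million generated tokens. This is a back-of-the-envelope sketch that assumes the TPS column is a sustained generation rate in tokens per second; real utilization will vary.

```python
def cost_per_million_tokens(price_per_hour: float, tps: float) -> float:
    """Estimate $/1M generated tokens: at a sustained rate of `tps`
    tokens/second, one hour of rental yields tps * 3600 tokens."""
    tokens_per_hour = tps * 3600
    return price_per_hour / tokens_per_hour * 1_000_000

# Example using the h200-8.52.1024.640 row ($37.34/hour at 11.137 TPS):
cost = cost_per_million_tokens(37.34, 11.137)
```

Note that a cheaper hourly rate is not always cheaper per token: a configuration with higher TPS can cost less per million tokens despite a higher hourly price.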

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.