Qwen3-VL-235B-A22B-Instruct

multimodal

Qwen3-VL-235B-A22B-Instruct is a flagship open-source multimodal model with 236 billion parameters in a Mixture of Experts (MoE) architecture (22 billion activated per token) and a native context length of 256K tokens, extendable up to 1M. Its multimodal capabilities rest on three key architectural innovations that deliver state-of-the-art performance.

First, Interleaved-MRoPE (Multi-dimensional Rotary Position Embedding) distributes positional frequency information uniformly across three dimensions (time, width, and height), dramatically improving comprehension of long video sequences and preserving temporal coherence even in hour-long videos. Second, DeepStack aggregates multi-level features from the Vision Transformer (ViT), capturing fine-grained visual detail and significantly improving image-text alignment. Third, the Text-Timestamp Alignment mechanism replaces the earlier T-RoPE approach and aligns textual descriptions to specific timestamps with second-level precision, substantially strengthening the model's ability to localize and interpret events over time.

In addition, the model implements advanced spatial perception, allowing it to assess object positions, viewpoints, and occlusions, which results in exceptional 2D and 3D grounding. Together, these components enable deep, seamless integration of text and images, empowering the model to achieve outstanding performance.
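The published materials describe Interleaved-MRoPE only at a high level; the sketch below is a minimal illustration of the general idea rather than the model's actual implementation. It assumes rotary frequency channels are assigned round-robin to the time, height, and width axes, so that each axis receives both high- and low-frequency components instead of one contiguous band.

```python
import numpy as np

# Illustrative sketch of interleaved multi-axis rotary embeddings (not the
# official implementation). Assumption: the head_dim/2 frequency channels are
# distributed round-robin across the (time, height, width) axes.

def interleaved_mrope_angles(t, h, w, head_dim=128, base=10000.0):
    """Return rotary angles for a token at video position (t, h, w)."""
    half = head_dim // 2
    inv_freq = base ** (-np.arange(half) / half)   # standard RoPE frequency ladder
    axis = np.arange(half) % 3                     # 0=time, 1=height, 2=width (interleaved)
    pos = np.choose(axis, [t, h, w])               # pick the coordinate for each channel
    return pos * inv_freq                          # angles later applied as cos/sin rotations

# Example: a visual patch in frame 12, grid row 3, grid column 7
angles = interleaved_mrope_angles(t=12, h=3, w=7)
print(angles.shape)  # (64,) -> one rotation angle per frequency channel
```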

Qwen3-VL-235B-A22B-Instruct offers revolutionary visual-agent capabilities: it can operate graphical user interfaces (GUIs) on both PCs and mobile devices, recognizing UI elements, understanding button functions, invoking tools, and automating routine tasks such as form filling. The model is also exceptionally proficient at programming, generating application code directly from screenshots, design mockups, and video demonstrations, which significantly accelerates UI prototyping and development.

The model supports OCR in 32 languages and excels at extracting text from and analyzing the structure of complex multilingual documents, including PDFs, low-quality images, and documents with unconventional formatting, charts, and tables.
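As a concrete illustration of the screenshot-to-code scenario, here is a minimal local-inference sketch with Hugging Face Transformers. The auto classes, chat-message format, and generation settings follow common VLM usage and are assumptions rather than a verified recipe for this checkpoint; the image URL is a placeholder.

```python
from transformers import AutoModelForImageTextToText, AutoProcessor

MODEL_ID = "Qwen/Qwen3-VL-235B-A22B-Instruct"

# Load the processor and model; device_map="auto" shards the weights across
# the available GPUs (the full checkpoint requires a multi-GPU node, see the
# recommended configurations below).
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageTextToText.from_pretrained(
    MODEL_ID, dtype="auto", device_map="auto"
)

# Ask the model to turn a UI screenshot into HTML/CSS, one of the
# screenshot-to-code scenarios described above.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/ui_mockup.png"},
            {"type": "text", "text": "Generate HTML and CSS that reproduces this interface."},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=1024)
print(processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0])
```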

Importantly, this visual extension is built on Qwen/Qwen3-235B-A22B-Instruct-2507, the strongest open-source model in the Qwen3 series, which gives Qwen3-VL not only the ability to "see" but also the ability to deeply understand, reason about, and act upon multimodal inputs.


Announce Date: 23.09.2025
Parameters: 236B
Experts: 128
Activated at inference: 22B
Context: 256K (262,144 tokens)
Layers: 94
Attention Type: Full Attention
VRAM requirements: 178.2 GB with 4-bit quantization (see the rough estimate sketched below this list)
Developer: Qwen
Transformers Version: 4.57.0.dev0
License: Apache 2.0
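The VRAM figure above is roughly reproducible with back-of-the-envelope arithmetic; the overhead factor in the sketch below (covering the vision tower, KV cache, and runtime buffers) is an assumption, not the exact formula behind the quoted number.

```python
# Rough VRAM estimate for the 4-bit figure quoted above (not the provider's
# exact formula).
total_params = 236e9        # total parameters; MoE stores all experts in memory
bits_per_weight = 4         # 4-bit quantization
weights_gb = total_params * bits_per_weight / 8 / 1e9   # ~118 GB of weights

overhead = 1.5              # assumed factor for KV cache, vision tower,
                            # activations and runtime buffers
print(f"~{weights_gb * overhead:.0f} GB")  # ~177 GB, close to the quoted 178.2 GB
```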

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3-VL-235B-A22B-Instruct capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
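Once a public endpoint is published, it can be queried with any OpenAI-compatible client, assuming the endpoint exposes a Chat Completions API. In the sketch below the base URL, model name, and image URL are placeholders; the token is the one issued on the token management page.

```python
from openai import OpenAI

# Placeholder base URL and model name -- substitute the values shown in the
# endpoint table once a public endpoint for this model is available.
client = OpenAI(
    base_url="https://example.immers.cloud/v1",
    api_key="YOUR_API_TOKEN",
)

response = client.chat.completions.create(
    model="Qwen3-VL-235B-A22B-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/invoice.png"}},
                {"type": "text", "text": "Extract all text from this document and return it as Markdown."},
            ],
        }
    ],
    max_tokens=1024,
)
print(response.choices[0].message.content)
```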

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the serving sketch below).
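For self-hosting on a dedicated instance, one common route is an OpenAI-compatible inference server; the sketch below uses vLLM's Python API as one assumed option, with illustrative values for parallelism and context length that should be matched to the chosen configuration.

```python
from vllm import LLM, SamplingParams

# Sketch of self-hosting on a dedicated multi-GPU instance with vLLM (an
# assumed serving stack -- any OpenAI-compatible server works).
llm = LLM(
    model="Qwen/Qwen3-VL-235B-A22B-Instruct",  # or a local path to fine-tuned weights
    tensor_parallel_size=8,                     # match the GPU count of the configuration
    max_model_len=262_144,                      # raise toward full context if memory allows
)

messages = [{
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
        {"type": "text", "text": "Summarize the trend shown in this chart."},
    ],
}]

outputs = llm.chat(messages, SamplingParams(max_tokens=512, temperature=0.2))
print(outputs[0].outputs[0].text)
```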

Recommended configurations for hosting Qwen3-VL-235B-A22B-Instruct

Prices:
Name                           Context   vCPU  RAM, MB  Disk, GB  GPU  Price, hour
teslaa100-3.32.384.240         262,144   32    393,216  240       3    $8.00
rtx5090-8.44.256.240           262,144   44    262,144  240       8    $11.55
h200-2.24.256.240              262,144   24    262,144  240       2    $13.89
teslah100-3.32.384.240         262,144   32    393,216  240       3    $15.58

Prices:
Name                           Context   vCPU  RAM, MB  Disk, GB  GPU  Price, hour
teslaa100-4.32.384.320.nvlink  262,144   32    393,216  320       4    $10.35
teslah100-4.44.512.320         262,144   44    524,288  320       4    $20.77
h200-3.32.512.480              262,144   32    524,288  480       3    $21.08

Prices:
Name                           Context   vCPU  RAM, MB  Disk, GB  GPU  Price, hour
teslaa100-8.44.704.960.nvlink  262,144   44    720,896  960       8    $20.48
h200-6.52.896.640              262,144   52    917,504  640       6    $41.79

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.