Qwen3-VL-235B-A22B-Instruct is a flagship open-source multimodal model featuring 235 billion parameters in a Mixture-of-Experts (MoE) architecture (22 billion active parameters) and a native context length of 256K tokens (extendable up to 1M). The model's multimodal capabilities are built upon three key architectural innovations that deliver state-of-the-art performance.

First, Interleaved-MRoPE (Multi-dimensional Rotary Position Embedding) introduces robust positional embeddings that distribute frequency information uniformly across three dimensions (time, width, and height), dramatically enhancing comprehension of long video sequences and preserving temporal coherence even in hour-long videos. Second, DeepStack aggregates multi-level features from the Vision Transformer (ViT), capturing fine-grained visual details and significantly improving image-text alignment accuracy. Third, the Text-Timestamp Alignment mechanism replaces the earlier T-RoPE approach, enabling precise second-level alignment of textual descriptions with specific timestamps, thereby substantially strengthening the model's ability to localize and interpret events over time.

Additionally, the model implements advanced spatial perception, allowing it to assess object positions, viewpoints, and occlusions, which yields exceptional 2D and 3D grounding capabilities. Together, these components enable deep, seamless integration of text and images, empowering the model to achieve outstanding performance.
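The Text-Timestamp Alignment is easiest to appreciate from the client side. Below is a minimal sketch that asks the model for second-level event localization in a video through an OpenAI-compatible endpoint (for example, one served by vLLM). The base URL, model identifier, video URL, and the `video_url` content type (a vLLM multimodal extension, not part of the core OpenAI schema) are illustrative assumptions, not details confirmed by this page.

```python
# Minimal sketch: second-level event localization in a video.
# Assumes an OpenAI-compatible server (e.g. vLLM) hosting the model;
# the base_url, model id, video URL, and "video_url" content type
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-VL-235B-A22B-Instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "video_url",
             "video_url": {"url": "https://example.com/demo.mp4"}},  # hypothetical video
            {"type": "text",
             "text": "At which second does the speaker pick up the phone? "
                     "Answer with a timestamp."},
        ],
    }],
)
print(response.choices[0].message.content)
```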
Qwen3-VL-235B-A22B-Instruct offers revolutionary visual-agent capabilities, enabling it to operate graphical user interfaces (GUIs) on both PCs and mobile devices: recognizing UI elements, understanding button functions, invoking tools, and even automating routine tasks such as form filling. The model also demonstrates exceptional proficiency in programming, generating application code directly from screenshots, design mockups, and video demonstrations, which significantly accelerates UI prototyping and development.

The model supports OCR in 32 languages and excels at extracting text and analyzing the structure of complex multilingual documents, including PDFs, low-quality images, and documents with unconventional formatting, charts, and tables.
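As a concrete illustration of the document-parsing use case, the sketch below sends a scanned page to the same kind of OpenAI-compatible endpoint and asks for structured extraction. The base URL, model identifier, and image URL are placeholders; the prompt itself is ordinary text and carries no model-specific syntax.

```python
# Minimal sketch: multilingual OCR and document structure extraction.
# base_url, model id, and the image URL are illustrative placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-VL-235B-A22B-Instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": "https://example.com/invoice_scan.png"}},  # hypothetical scan
            {"type": "text",
             "text": "Extract all text from this document and return any "
                     "tables as Markdown, preserving the original layout."},
        ],
    }],
)
print(response.choices[0].message.content)
```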
Importantly, this visual extension is built upon Qwen/Qwen3-235B-A22B-Instruct-2507—the best open-source model in the Qwen3 series—providing Qwen3-VL not only with the ability to “see,” but also to deeply understand, reason about, and act upon multimodal inputs.
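Because Qwen3-VL ships with Hugging Face Transformers support, the model can also be run directly, hardware permitting. The sketch below follows the pattern used for Qwen's MoE vision-language checkpoints; the exact class name (`Qwen3VLMoeForConditionalGeneration`) and the `dtype` argument assume a recent transformers release with Qwen3-VL support and may differ in yours.

```python
# Minimal sketch of local inference via Hugging Face Transformers.
# Assumes a transformers version with Qwen3-VL support; the class name
# follows Qwen's MoE VL naming and may differ in your release.
from transformers import AutoProcessor, Qwen3VLMoeForConditionalGeneration

model_id = "Qwen/Qwen3-VL-235B-A22B-Instruct"
model = Qwen3VLMoeForConditionalGeneration.from_pretrained(
    model_id, dtype="auto", device_map="auto"  # shard across available GPUs
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "https://example.com/screenshot.png"},  # hypothetical image
        {"type": "text", "text": "Describe this UI and list its buttons."},
    ],
}]

inputs = processor.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)[0])
```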
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following scenarios:
Context | vCPU | RAM, MB | Disk, GB | GPUs | Price/hour | Link
---|---|---|---|---|---|---
262,144 | 32 | 393216 | 240 | 3 | $8.00 | Launch
262,144 | 44 | 262144 | 240 | 8 | $11.55 | Launch
262,144 | 24 | 262144 | 240 | 2 | $13.89 | Launch
262,144 | 32 | 393216 | 240 | 3 | $15.58 | Launch
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.