Qwen3-VL-32B-Thinking

reasoning
multimodal

Qwen3-VL-32B-Thinking is a reasoning-optimized version of the 32-billion-parameter model, specifically trained for complex tasks that require deep visual analysis and multi-step logical inference. Architecturally, the model is built on the same three key innovations as the rest of the Qwen3-VL family: Interleaved-MRoPE, DeepStack, and Text-Timestamp Alignment. The Thinking version, however, undergoes additional specialized reinforcement learning aimed at developing structured reasoning over visual content. This training enables the model not only to recognize visual elements but also to establish causal relationships, formulate and test hypotheses, and build logical arguments grounded in visual data.

On multimodal reasoning benchmarks, Qwen3-VL-32B-Thinking leads open-source and closed models of similar size on nearly all compared criteria and categories. The model natively supports a 256K-token context window, expandable to 1M tokens, which is critical for processing complex research papers, technical documents, or long educational videos while maintaining context for deep analysis. Support for 32 OCR languages, with improved recognition of technical terminology, mathematical formulas, and scientific notation, makes the model a versatile tool for digitizing archival documents and scientific articles.
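Before sending a long document to the model, it can be useful to check whether it fits the native window. The sketch below is a rough pre-flight estimate only; the 4-characters-per-token ratio is a crude heuristic (an assumption, not the model's tokenizer), so use the actual tokenizer for exact counts.

```python
# Rough check of whether a text fits Qwen3-VL-32B-Thinking's context window.
# The chars-per-token ratio is a heuristic, NOT the model's real tokenizer.

NATIVE_CONTEXT = 262_144        # 256K tokens, as listed for this model
EXTENDED_CONTEXT = 1_000_000    # ~1M tokens with context extension

def estimated_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Very rough token estimate; use the real tokenizer for exact counts."""
    return int(len(text) / chars_per_token)

def fits_context(text: str, budget: int = NATIVE_CONTEXT, reserve: int = 4_096) -> bool:
    """Leave `reserve` tokens of headroom for the model's reasoning and answer."""
    return estimated_tokens(text) + reserve <= budget

paper = "word " * 50_000          # ~250k characters, ~62.5k estimated tokens
print(fits_context(paper))        # True: fits the native 256K window
```

Documents that fail this check can still be processed by raising `budget` to `EXTENDED_CONTEXT` on a deployment with context extension enabled, or by chunking the input.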

The use cases for Qwen3-VL-32B-Thinking span professional and academic fields:

  • Scientific Research & Academia: benefits from the model's ability to analyze complex experimental data, interpret scientific visualizations, and formulate well-grounded hypotheses based on visual patterns.
  • Educational Platforms: can leverage the model to generate detailed, step-by-step solutions to complex problems.
  • Medical Diagnostics & Complex Case Analysis: applicable in scenarios requiring multi-factorial reasoning over medical images.
  • Financial Analysis & Business Intelligence: useful for interpreting complex charts, graphs, and data visualizations, where the model can identify trends and anomalies and formulate forecasts with justifications.
  • Long-Form Video Analysis: excels in tasks requiring not just the temporal localization of events but also an understanding of the causal relationships between them, the identification of hidden patterns, and the formulation of analytical conclusions.


Announce Date: 22.10.2025
Parameters: 33B
Context: 263K
Layers: 64
Attention Type: Full Attention
Developer: Qwen
Transformers Version: 4.57.0.dev0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3-VL-32B-Thinking capabilities. You can obtain an API access token on the token management page after registration and verification.
| Model Name | Context | Type | GPU | Status | Link |
|---|---|---|---|---|---|

There are no public endpoints for this model yet.
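Once an endpoint is available, requests typically follow the OpenAI-compatible chat format with inline images. The sketch below only builds the request payload; the base URL and model identifier are placeholders (assumptions), and you would substitute your own API token before sending.

```python
# Sketch of a multimodal chat request for an OpenAI-compatible endpoint.
# BASE_URL and MODEL are placeholders (assumptions), not confirmed values.
import base64
import json

BASE_URL = "https://example-endpoint.invalid/v1"  # placeholder base URL
API_TOKEN = "YOUR_API_TOKEN"                      # from the token management page
MODEL = "Qwen3-VL-32B-Thinking"                   # assumed model identifier

def build_request(question: str, image_bytes: bytes) -> dict:
    """Build an OpenAI-style chat payload with an inline base64 image."""
    image_b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    }

payload = build_request("What trend does this chart show?", b"\x89PNG...")
print(json.dumps(payload)[:60])
# Send with any HTTP client, e.g.:
#   requests.post(f"{BASE_URL}/chat/completions",
#                 headers={"Authorization": f"Bearer {API_TOKEN}"},
#                 json=payload)
```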

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
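As one illustration of the last two points, a dedicated instance could serve the model with an OpenAI-compatible server such as vLLM. This is a sketch under assumptions: the serving stack, the `Qwen/Qwen3-VL-32B-Thinking` repository name, and the adapter path are illustrative, and flag values are sized for a 2-GPU server.

```shell
# Illustrative vLLM launch on a dedicated 2-GPU instance (assumed stack;
# your deployment may differ).
# --tensor-parallel-size should match the instance's GPU count;
# --max-model-len 262144 enables the full native 256K context;
# --enable-lora / --lora-modules load custom fine-tuned adapters.
vllm serve Qwen/Qwen3-VL-32B-Thinking \
  --tensor-parallel-size 2 \
  --max-model-len 262144 \
  --enable-lora \
  --lora-modules my-adapter=/path/to/my-adapter
```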

Recommended server configurations for hosting Qwen3-VL-32B-Thinking

Prices:

| Name | Context | Parallelism | GPU | Price, hour | TPS | Max Concurrency | Link |
|---|---|---|---|---|---|---|---|
| rtxa5000-6.24.192.160.nvlink | 262,144 | pipeline | 6 | $3.50 | — | 1.472 | Launch |
| teslav100-4.32.96.160 | 262,144 | tensor | 4 | $4.35 | — | 1.325 | Launch |
| teslaa100-2.24.96.160.nvlink | 262,144 | tensor | 2 | $4.61 | 26.930 | 1.853 | Launch |
| rtxa5000-8.24.256.160.nvlink | 262,144 | tensor | 8 | $4.61 | — | 2.069 | Launch |
| h200-1.16.128.160 | 262,144 | — | 1 | $4.74 | — | 1.625 | Launch |
| rtx5090-4.16.128.160 | 262,144 | tensor | 4 | $5.74 | — | 1.325 | Launch |
| rtx4090-6.44.256.160 | 262,144 | pipeline | 6 | $5.83 | — | 1.472 | Launch |
| rtx4090-8.44.256.160 | 262,144 | tensor | 8 | $7.51 | — | 2.069 | Launch |
| h100-2.24.256.160 | 262,144 | tensor | 2 | $7.84 | 33.170 | 1.853 | Launch |
| h100nvl-2.24.192.240 | 262,144 | tensor | 2 | $8.17 | — | 2.247 | Launch |
Prices:

| Name | Context | Parallelism | GPU | Price, hour | TPS | Max Concurrency | Link |
|---|---|---|---|---|---|---|---|
| rtxa5000-6.24.192.160.nvlink | 262,144 | pipeline | 6 | $3.50 | — | 1.235 | Launch |
| rtxa5000-8.24.256.160.nvlink | 262,144 | tensor | 8 | $4.61 | — | 1.832 | Launch |
| teslav100-4.32.256.160 | 262,144 | tensor | 4 | $4.66 | — | 1.088 | Launch |
| teslaa100-2.24.128.160.nvlink | 262,144 | tensor | 2 | $4.67 | 53.940 | 1.617 | Launch |
| h200-1.16.128.160 | 262,144 | — | 1 | $4.74 | — | 1.388 | Launch |
| rtx5090-4.16.128.160 | 262,144 | tensor | 4 | $5.74 | — | 1.088 | Launch |
| rtx4090-6.44.256.160 | 262,144 | pipeline | 6 | $5.83 | — | 1.235 | Launch |
| rtx4090-8.44.256.160 | 262,144 | tensor | 8 | $7.51 | — | 1.832 | Launch |
| h100-2.24.256.160 | 262,144 | tensor | 2 | $7.84 | 54.820 | 1.617 | Launch |
| h100nvl-2.24.192.240 | 262,144 | tensor | 2 | $8.17 | — | 2.010 | Launch |
Prices:

| Name | Context | Parallelism | GPU | Price, hour | TPS | Max Concurrency | Link |
|---|---|---|---|---|---|---|---|
| rtxa5000-8.24.256.160.nvlink | 262,144 | tensor | 8 | $4.61 | — | 1.344 | Launch |
| teslaa100-2.24.192.160.nvlink | 262,144 | tensor | 2 | $4.80 | 36.770 | 1.129 | Launch |
| rtx4090-8.44.256.160 | 262,144 | tensor | 8 | $7.51 | — | 1.344 | Launch |
| h100-2.24.256.160 | 262,144 | tensor | 2 | $7.84 | — | 1.129 | Launch |
| h100nvl-2.24.192.240 | 262,144 | tensor | 2 | $8.17 | — | 1.523 | Launch |
| rtx5090-6.44.256.160 | 262,144 | pipeline | 6 | $8.86 | — | 1.423 | Launch |
| h200-2.24.256.160 | 262,144 | tensor | 2 | $9.40 | — | 2.844 | Launch |
| rtx5090-8.44.256.160 | 262,144 | tensor | 8 | $11.54 | — | 2.244 | Launch |

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.