Qwen3.5-122B-A10B is the second most powerful model in the new Qwen 3.5 lineup, designed to tackle complex research and industrial challenges. Its architecture comprises 48 layers with hybrid attention: blocks of three Gated DeltaNet layers interleaved with one Gated Attention layer (a 3:1 ratio), each augmented by a sparse Mixture-of-Experts (MoE) block containing 256 experts. Per token, the model activates only 8 of these plus one shared expert (about 10B active parameters), and its native context of 262,144 tokens can be extended to 1 million, enabling the processing of entire books or massive logs.
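The architecture numbers above can be sanity-checked with a short sketch. The layer names and the 3:1 interleaving come from the article; the exact ordering within each block of four is an assumption for illustration.

```python
# Sketch of the hybrid layer pattern described above: 48 layers in a
# repeating pattern of three Gated DeltaNet ("linear attention") layers
# followed by one Gated Attention ("full attention") layer.
# The exact position of the full-attention layer within each block of
# four is an assumption; the 3:1 ratio itself is from the article.

NUM_LAYERS = 48

def layer_pattern(num_layers: int) -> list[str]:
    """Return the per-layer attention type for the 3:1 hybrid stack."""
    return [
        "full_attention" if i % 4 == 3 else "linear_attention"
        for i in range(num_layers)
    ]

pattern = layer_pattern(NUM_LAYERS)
print(pattern.count("linear_attention"))  # 36 Gated DeltaNet layers
print(pattern.count("full_attention"))    # 12 Gated Attention layers

# Per token, the MoE router selects 8 of the 256 experts, plus 1 shared
# expert that always runs, for ~10B active parameters out of 122B total.
EXPERTS_TOTAL, EXPERTS_ACTIVE, SHARED = 256, 8, 1
print(EXPERTS_ACTIVE + SHARED)  # 9 experts execute per token
```

This is why the model is far cheaper to serve than a dense 122B model: only the routed and shared experts run for any given token.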
The model's uniqueness lies in its native multimodality—it was trained with early fusion of visual and textual data, allowing it to proficiently process images, documents, and videos. Compared to the previous Qwen3 version, the 3.5 model features an enhanced thinking mode with adaptive switching between deep reasoning and quick responses.
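The switchable thinking mode is typically exposed as a request-level flag on OpenAI-compatible endpoints. A minimal sketch of such a payload is below; the `chat_template_kwargs`/`enable_thinking` names follow the convention used by common Qwen serving stacks and are an assumption, not something stated in this article, so check your provider's documentation.

```python
# Hypothetical request payload for toggling thinking mode on an
# OpenAI-compatible endpoint. Field names under "chat_template_kwargs"
# are an assumption based on common Qwen serving conventions.
import json

def build_request(prompt: str, deep_reasoning: bool) -> dict:
    """Build a chat-completion payload with the thinking-mode toggle set."""
    return {
        "model": "Qwen3.5-122B-A10B",
        "messages": [{"role": "user", "content": prompt}],
        # True -> deep step-by-step reasoning; False -> quick responses.
        "chat_template_kwargs": {"enable_thinking": deep_reasoning},
    }

payload = build_request("Summarize this log file.", deep_reasoning=False)
print(json.dumps(payload, indent=2))
```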
On benchmarks, the model demonstrates leading results. In the general-knowledge test MMLU-Pro (86.7), it surpasses Qwen3-235B-A22B (84.4) and competitors like GPT-OSS-120B (80.8). It also achieves excellent scores in complex reasoning on GPQA Diamond (86.6) and scientific reasoning on SuperGPQA (67.1). In programming, and especially in agentic scenarios (BFCL-V4 – 72.2, TAU2-Bench – 79.5), the model outperforms many specialized competitors. Its multimodal capabilities are robust: in mathematical visual reasoning on MathVision (86.2) and complex visual reasoning on MMMU-Pro (76.9), the model significantly advances beyond previous versions and solutions from other developers, such as Claude-Sonnet-4.5.
The model is fully capable of serving as the "engine" for enterprise-level projects with reasonable infrastructure requirements. It is the ideal choice for large corporations and research institutions tackling tasks that demand deep data analysis, complex software development, cutting-edge multimodal agent creation, and automation systems where high precision and depth of understanding are critically important.
| Model Name | Context | Type | GPU | Status | Link |
|---|---|---|---|---|---|
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
The available dedicated-instance configurations and hourly rates are listed below:
| Name | Context | Parallelism | GPU | Price/hr | TPS | |
|---|---|---|---|---|---|---|
| | 262,144 | tensor | 4 | $1.75 | 1.570 | Launch |
| | 262,144 | tensor | 4 | $2.34 | 1.570 | Launch |
| | 262,144 | tensor | 4 | $2.97 | 1.570 | Launch |
| | 262,144 | tensor | 4 | $3.68 | 1.570 | Launch |
| | 262,144 | pipeline | 3 | $3.89 | 1.977 | Launch |
| | 262,144 | | 1 | $4.11 | 2.498 | Launch |
| | 262,144 | pipeline | 3 | $4.34 | 1.977 | Launch |
| | 262,144 | tensor | 4 | $4.35 | 6.257 | Launch |
| | 262,144 | tensor | 2 | $4.61 | 11.759 | Launch |
| | 262,144 | | 1 | $4.74 | 9.382 | Launch |
| | 262,144 | tensor | 4 | $5.74 | 6.257 | Launch |
| | 262,144 | tensor | 2 | $7.84 | 11.759 | Launch |
| Name | Context | Parallelism | GPU | Price/hr | TPS | |
|---|---|---|---|---|---|---|
| | 262,144 | tensor | 2 | $4.93 | 3.348 | Launch |
| | 262,144 | tensor | 8 | $7.52 | 5.594 | Launch |
| | 262,144 | tensor | 2 | $7.85 | 3.348 | Launch |
| | 262,144 | tensor | 2 | $8.17 | 7.450 | Launch |
| | 262,144 | pipeline | 6 | $8.86 | 6.408 | Launch |
| | 262,144 | tensor | 2 | $9.41 | 21.219 | Launch |
| | 262,144 | tensor | 8 | $11.55 | 14.969 | Launch |
| Name | Context | Parallelism | GPU | Price/hr | TPS | |
|---|---|---|---|---|---|---|
| | 262,144 | tensor | 4 | $9.17 | 7.326 | Launch |
| | 262,144 | tensor | 2 | $9.42 | 2.573 | Launch |
| | 262,144 | pipeline | 3 | $12.38 | 2.166 | Launch |
| | 262,144 | tensor | 4 | $14.99 | 7.326 | Launch |
| | 262,144 | tensor | 4 | $16.23 | 15.529 | Launch |
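Since instances are billed hourly, a rough monthly budget is just the hourly rate times the hours of uptime. The sketch below uses the lowest and highest hourly rates from the tables above; the 730-hour month is an assumption for a continuously running instance, not a figure from this page.

```python
# Rough monthly cost estimate for a continuously running dedicated
# instance billed hourly. 730 hours/month is an illustrative assumption
# (365 days * 24 hours / 12 months ~ 730).

HOURS_PER_MONTH = 730

def monthly_cost(hourly_usd: float, hours: int = HOURS_PER_MONTH) -> float:
    """Return the estimated monthly cost in USD, rounded to cents."""
    return round(hourly_usd * hours, 2)

print(monthly_cost(1.75))   # cheapest rate listed above  -> 1277.5
print(monthly_cost(16.23))  # most expensive rate listed  -> 11847.9
```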
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.