Qwen3.5-122B-A10B

reasoning
multimodal

Qwen3.5-122B-A10B is the second most powerful model in the new Qwen 3.5 lineup, designed for complex research and industrial workloads. Its architecture comprises 48 layers with hybrid attention: blocks of three Gated DeltaNet layers interleaved with one Gated Attention layer (a 3:1 ratio), each layer augmented by a sparse Mixture of Experts (MoE) block of 256 experts. For every token the model activates only 8 routed experts plus one shared expert (about 10B active parameters), and its native context of 262,144 tokens can be extended to 1 million, enough to process entire books or massive logs.
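A minimal sketch (not official code) of how the figures above fit together: the 3:1 hybrid pattern over 48 layers yields 12 full-attention layers, and 8 routed experts plus 1 shared expert contribute to each token's MoE output. Names here are purely illustrative.

```python
# Illustrative sketch: derive structural counts from the card's figures.
TOTAL_LAYERS = 48
PATTERN = ["deltanet", "deltanet", "deltanet", "attention"]  # 3:1 hybrid ratio

layers = [PATTERN[i % len(PATTERN)] for i in range(TOTAL_LAYERS)]
full_attention_layers = layers.count("attention")
print(full_attention_layers)  # 48 / 4 = 12, matching the spec line below

TOTAL_EXPERTS = 256   # experts per MoE block
ROUTED_ACTIVE = 8     # routed experts selected per token
SHARED = 1            # always-on shared expert
active_per_token = ROUTED_ACTIVE + SHARED
print(active_per_token)  # 9 experts contribute to each token's MoE output
```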

The model's uniqueness lies in its native multimodality—it was trained with early fusion of visual and textual data, allowing it to proficiently process images, documents, and videos. Compared to the previous Qwen3 version, the 3.5 model features an enhanced thinking mode with adaptive switching between deep reasoning and quick responses.

On benchmarks, the model posts leading results. On the general-knowledge test MMLU-Pro it scores 86.7, surpassing Qwen3-235B-A22B (84.4) and competitors such as GPT-OSS-120B (80.8). It also performs strongly on complex reasoning with GPQA Diamond (86.6) and scientific reasoning with SuperGPQA (67.1). In coding, and especially in agentic scenarios (BFCL-V4: 72.2; TAU2-Bench: 79.5), the model outperforms many specialized competitors. Its multimodal capabilities are robust: on visual math reasoning in MathVision (86.2) and complex visual reasoning in MMMU-Pro (76.9), it advances significantly beyond previous versions and rival systems such as Claude-Sonnet-4.5.

The model is fully capable of serving as the "engine" for enterprise-level projects with reasonable infrastructure requirements. It is the ideal choice for large corporations and research institutions tackling tasks that demand deep data analysis, complex software development, cutting-edge multimodal agent creation, and automation systems where high precision and depth of understanding are critically important.


Announce Date: 24.02.2026
Parameters: 126B
Experts: 256
Activated at inference: 10B
Context: 262K
Layers: 48, using full attention: 12
Attention Type: Linear Attention
Developer: Qwen
Transformers Version: 4.57.0.dev0
vLLM Version: 0.17.0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3.5-122B-A10B capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name Context Type GPU Status Link
There are no public endpoints for this model yet.
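Once an endpoint is available, a request could be assembled as below. This is a hedged sketch that assumes the endpoint follows the common OpenAI-compatible chat-completions convention (not confirmed by this page); the base URL is a placeholder, and the token comes from the token management page after registration.

```python
# Hedged sketch: build (but do not send) a chat-completions request for a
# Qwen3.5-122B-A10B endpoint, assuming an OpenAI-compatible API.
# BASE_URL and API_TOKEN are placeholders, not real values.
import json
import urllib.request

BASE_URL = "https://example.invalid/v1"   # placeholder endpoint URL
API_TOKEN = "YOUR_API_TOKEN"              # from the token management page

payload = {
    "model": "Qwen3.5-122B-A10B",
    "messages": [{"role": "user", "content": "Summarize the key risks in this report."}],
    "max_tokens": 512,
}

req = urllib.request.Request(
    BASE_URL + "/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
)
# With real values filled in, urllib.request.urlopen(req) would send it.
print(req.get_full_url())
```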

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
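For a private instance, the card lists vLLM 0.17.0 as the supported serving stack. The sketch below assembles a plausible launch command for one of the 4-GPU tensor-parallel configurations; the model identifier and flag values are illustrative assumptions, not a tested recipe.

```python
# Hedged sketch: assemble a vLLM launch command for a 4-GPU
# tensor-parallel instance at the model's native 262,144-token context.
# "Qwen/Qwen3.5-122B-A10B" is a hypothetical model ID.
import shlex

cmd = [
    "vllm", "serve", "Qwen/Qwen3.5-122B-A10B",
    "--tensor-parallel-size", "4",    # split weights across 4 GPUs
    "--max-model-len", "262144",      # native context from the card
]
print(shlex.join(cmd))
```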

Recommended server configurations for hosting Qwen3.5-122B-A10B

Prices:
Name                          Context  Parallelism  GPUs  Price/hour  TPS
teslaa10-4.16.128.160         262,144  tensor       4     $1.75       1.570
rtxa5000-4.16.128.160.nvlink  262,144  tensor       4     $2.34       1.570
rtx3090-4.16.96.320           262,144  tensor       4     $2.97       1.570
rtx4090-4.16.96.320           262,144  tensor       4     $3.68       1.570
teslav100-3.64.256.320        262,144  pipeline     3     $3.89       1.977
h100nvl-1.16.96.160           262,144  —            1     $4.11       2.498
rtx5090-3.16.96.160           262,144  pipeline     3     $4.34       1.977
teslav100-4.32.96.160         262,144  tensor       4     $4.35       6.257
teslaa100-2.24.96.160.nvlink  262,144  tensor       2     $4.61       11.759
h200-1.16.128.160             262,144  —            1     $4.74       9.382
rtx5090-4.16.128.160          262,144  tensor       4     $5.74       6.257
h100-2.24.256.160             262,144  tensor       2     $7.84       11.759
Prices:
Name                          Context  Parallelism  GPUs  Price/hour  TPS
teslaa100-2.24.256.240        262,144  tensor       2     $4.93       3.348
rtx4090-8.44.256.240          262,144  tensor       8     $7.52       5.594
h100-2.24.256.240             262,144  tensor       2     $7.85       3.348
h100nvl-2.24.192.240          262,144  tensor       2     $8.17       7.450
rtx5090-6.44.256.240          262,144  pipeline     6     $8.86       6.408
h200-2.24.256.240             262,144  tensor       2     $9.41       21.219
rtx5090-8.44.256.240          262,144  tensor       8     $11.55      14.969
Prices:
Name                          Context  Parallelism  GPUs  Price/hour  TPS
teslaa100-4.16.256.480        262,144  tensor       4     $9.17       7.326
h200-2.24.256.320             262,144  tensor       2     $9.42       2.573
h100nvl-3.24.384.480          262,144  pipeline     3     $12.38      2.166
h100-4.16.256.480             262,144  tensor       4     $14.99      7.326
h100nvl-4.32.384.480          262,144  tensor       4     $16.23      15.529

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.