Qwen3.5-35B-A3B

reasoning
multimodal

Qwen3.5-35B-A3B is a mid-sized Mixture-of-Experts (MoE) model with 35 billion total parameters, of which only 3 billion are activated per token. The model comprises 40 layers with a hidden size of 2048 and uses a tokenizer with a notably large vocabulary of 248,320 tokens. Its hybrid attention architecture combines Gated DeltaNet layers (linear attention) for fast processing of long sequences with Gated Attention layers (full attention) for precise contextual understanding, allowing the model to support a native context window of 262,144 tokens without quality degradation. Vision-language capabilities are integrated through early-fusion training, providing improved image understanding compared to the Qwen3-VL series. The model supports two operational modes: Thinking for deep reasoning (mathematics, logic, code) and No-thinking for quick responses to simple queries. Inference is highly optimized: deploying the quantized format requires approximately 22–24 GB of GPU memory.
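The switch between the two modes is typically made through the chat template. Below is a minimal text-only sketch with Hugging Face Transformers, assuming the model follows the Qwen3 convention of an enable_thinking flag in apply_chat_template; the repository id is hypothetical and the image input path is omitted.

    # Minimal text-only sketch with Hugging Face Transformers (4.57+).
    # Assumptions: the repo id "Qwen/Qwen3.5-35B-A3B" is hypothetical, and the
    # enable_thinking flag follows the Qwen3 chat-template convention.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen3.5-35B-A3B"  # hypothetical repo id, check the model card
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

    messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]

    # Thinking mode for deep reasoning; set enable_thinking=False for quick replies.
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=2048)
    print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))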

The model demonstrates impressive results on benchmarks, falling only slightly behind the flagship versions of the series. In language tests such as MMLU-Pro (85.3) and SuperGPQA (63.4), it outperforms larger models from the previous generation. Its agentic capabilities stand out in particular: the TAU2-Bench score (81.2) is the best in the family, indicating excellent proficiency in planning and executing multi-step tasks using tools. In multimodal analysis, it shows results close to top-tier models: MathVision (83.9), MMMU-Pro (75.1), OCRBench (91.0). Notably, this model forms the foundation of the Qwen3.5-Flash service.

The model stands out for its versatility and efficiency, and it marks a significant leap over previous versions in agentic performance and multimodal understanding. This variant is an excellent choice for companies developing sophisticated assistants, order processing systems, and intelligent RAG systems over vast knowledge bases, and more generally for any scenario that requires high-quality context understanding and generation at controlled, reasonable infrastructure costs.


Announce Date: 24.02.2026
Parameters: 36B
Experts: 256
Activated at inference: 3B
Context: 262K
Layers: 40, using full attention: 10
Attention Type: Hybrid Attention
Mamba Type: Gated DeltaNet
Developer: Qwen
Transformers Version: 4.57.0.dev0
vLLM Version: 0.17.0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3.5-35B-A3B capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name | Context | Type | GPU | Status | Link
There are no public endpoints for this model yet.
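For reference, once an endpoint appears here, it can typically be queried with any OpenAI-compatible client. The sketch below is a minimal example under that assumption; the base URL is a placeholder, and the API token comes from the token management page.

    # Minimal sketch of calling a public endpoint, assuming an OpenAI-compatible API.
    # The base URL is a placeholder; substitute the address shown in the table above
    # and the API access token obtained after registration.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://<endpoint-address>/v1",  # placeholder
        api_key="<YOUR_API_TOKEN>",                # placeholder
    )
    response = client.chat.completions.create(
        model="Qwen3.5-35B-A3B",
        messages=[{"role": "user", "content": "Give a one-paragraph summary of hybrid attention."}],
    )
    print(response.choices[0].message.content)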

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
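On a private instance, the model can be served with vLLM (0.17.0 or later, per the specification above). The sketch below uses vLLM's offline Python API, assuming a hypothetical Qwen/Qwen3.5-35B-A3B repository id and a 2-GPU tensor-parallel configuration; adjust tensor_parallel_size and max_model_len to the rented hardware.

    # Minimal vLLM sketch for a private instance. The repo id is hypothetical;
    # point `model` at custom weights (e.g. a fine-tuned checkpoint) if needed.
    from vllm import LLM, SamplingParams

    llm = LLM(
        model="Qwen/Qwen3.5-35B-A3B",  # hypothetical repo id or a path to custom weights
        tensor_parallel_size=2,         # match the number of GPUs in the configuration
        max_model_len=262144,           # lower this if the full context is not required
    )
    params = SamplingParams(temperature=0.7, max_tokens=512)
    outputs = llm.generate(["Outline the steps to process a customer order."], params)
    print(outputs[0].outputs[0].text)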

Recommended server configurations for hosting Qwen3.5-35B-A3B

Prices:
Name | Context | Parallelism | GPUs | Price, hour | Max Concurrency
teslat4-3.32.64.160 | 262,144 | pipeline | 3 | $0.88 | 3.162
teslaa10-2.16.64.160 | 262,144 | tensor | 2 | $0.93 | 3.656
teslat4-4.16.64.160 | 262,144 | tensor | 4 | $0.96 | 5.514
teslaa2-3.32.128.160 | 262,144 | pipeline | 3 | $1.06 | 3.162
rtx2080ti-4.16.32.160 | 262,144 | tensor | 4 | $1.12 | 1.956
teslav100-1.12.64.160 | 262,144 | - | 1 | $1.20 | 1.304
rtxa5000-2.16.64.160.nvlink | 262,144 | tensor | 2 | $1.23 | 3.656
teslaa2-4.32.128.160 | 262,144 | tensor | 4 | $1.26 | 5.514
rtx3090-2.16.64.160 | 262,144 | tensor | 2 | $1.56 | 3.656
rtx5090-1.16.64.160 | 262,144 | - | 1 | $1.59 | 1.304
rtx3080-4.16.64.160 | 262,144 | tensor | 4 | $1.82 | 1.244
rtx4090-2.16.64.160 | 262,144 | tensor | 2 | $1.92 | 3.656
teslaa100-1.16.64.160 | 262,144 | - | 1 | $2.37 | 9.843
h100-1.16.64.160 | 262,144 | - | 1 | $3.83 | 9.843
h100nvl-1.16.96.160 | 262,144 | - | 1 | $4.11 | 12.334
h200-1.16.128.160 | 262,144 | - | 1 | $4.74 | 20.696
Prices:
Name | Context | Parallelism | GPUs | Price, hour | Max Concurrency
teslat4-4.16.64.160 | 262,144 | tensor | 4 | $0.96 | 2.512
teslaa2-4.32.128.160 | 262,144 | tensor | 4 | $1.26 | 2.512
teslaa10-3.16.96.160 | 262,144 | pipeline | 3 | $1.34 | 4.430
teslaa10-4.12.48.160 | 262,144 | tensor | 4 | $1.57 | 8.206
teslav100-2.16.64.240 | 262,144 | tensor | 2 | $2.22 | 3.501
rtx3090-3.16.96.160 | 262,144 | pipeline | 3 | $2.29 | 4.430
rtxa5000-4.16.128.160.nvlink | 262,144 | tensor | 4 | $2.34 | 8.206
teslaa100-1.16.64.160 | 262,144 | - | 1 | $2.37 | 6.842
rtx4090-3.16.96.160 | 262,144 | pipeline | 3 | $2.83 | 4.430
rtx3090-4.16.64.160 | 262,144 | tensor | 4 | $2.89 | 8.206
rtx5090-2.16.64.160 | 262,144 | tensor | 2 | $2.93 | 3.501
rtx4090-4.16.64.160 | 262,144 | tensor | 4 | $3.60 | 8.206
h100-1.16.64.160 | 262,144 | - | 1 | $3.83 | 6.842
h100nvl-1.16.96.160 | 262,144 | - | 1 | $4.11 | 9.332
h200-1.16.128.160 | 262,144 | - | 1 | $4.74 | 17.694
Prices:
Name | Context | Parallelism | GPUs | Price, hour | Max Concurrency
teslaa10-4.16.128.240 | 262,144 | tensor | 4 | $1.76 | 1.865
rtx3090-4.16.96.320 | 262,144 | tensor | 4 | $2.97 | 1.865
rtx4090-4.16.96.320 | 262,144 | tensor | 4 | $3.68 | 1.865
teslav100-3.64.256.320 | 262,144 | pipeline | 3 | $3.89 | 2.359
h100nvl-1.16.96.240 | 262,144 | - | 1 | $4.12 | 2.992
rtx5090-3.16.96.240 | 262,144 | pipeline | 3 | $4.35 | 2.359
teslav100-4.32.256.320 | 262,144 | tensor | 4 | $4.68 | 7.558
h200-1.16.128.240 | 262,144 | - | 1 | $4.74 | 11.354
teslaa100-2.24.256.240 | 262,144 | tensor | 2 | $4.93 | 14.240
rtx5090-4.16.128.320 | 262,144 | tensor | 4 | $5.76 | 7.558
h100-2.24.256.240 | 262,144 | tensor | 2 | $7.85 | 14.240

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.