Qwen3.6-35B-A3B

reasoning
multimodal
coding

Qwen3.6-35B-A3B is the first open-weight model in the Qwen3.6 series and one of the most practical Qwen releases for developers: it is released under Apache 2.0, operates as a causal language model with a vision encoder, and targets not only chat but also real-world agentic scenarios. The key engineering decision is a sparse Mixture-of-Experts (MoE) design: the model contains 35B parameters in total, yet only ~3B are activated per token during inference. This strikes a strong balance between the quality of a large model and the serving cost of a small active subset, which matters most in coding agents, where queries are often long, multi-step, and involve tool calls.
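The sparse-activation idea is easy to see in a few lines of Python. This is an illustrative sketch of top-k expert routing, not the actual Qwen implementation; only the sizes (256 experts per MoE block, 8 active per token) come from the card:

```python
import math
import random

# Illustrative sketch (not the actual Qwen routing code): top-k expert
# selection in a sparse MoE layer. Sizes mirror the model card:
# 256 experts per MoE block, 8 active per token.
NUM_EXPERTS = 256
TOP_K = 8

def route(logits: list[float]) -> list[tuple[int, float]]:
    """Return the TOP_K (expert_index, gate_weight) pairs for one token,
    with softmax weights renormalized over the selected experts only."""
    top = sorted(range(NUM_EXPERTS), key=lambda i: logits[i], reverse=True)[:TOP_K]
    m = max(logits[i] for i in top)
    exps = [math.exp(logits[i] - m) for i in top]
    z = sum(exps)
    return [(i, e / z) for i, e in zip(top, exps)]

random.seed(0)
logits = [random.gauss(0.0, 1.0) for _ in range(NUM_EXPERTS)]
selected = route(logits)
print(len(selected))  # 8 experts run for this token; the other 248 stay idle
```

Because only 8 of 256 experts execute per token, most of the 35B parameters sit idle on any given forward pass, which is where the ~3B active-parameter figure comes from.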

The architecture is a 40-layer hybrid with a hidden dimension of 2048, a vocabulary of 248,320 tokens, and a repeating pattern of 10 × (3 × Gated DeltaNet → MoE + 1 × Gated Attention → MoE). Each MoE block has 256 available experts, with 8 selected per token. Gated DeltaNet powers the majority of layers: it is a more economical linear-attention mechanism that maintains an updatable memory state instead of performing full token-to-token comparisons. Gating allows the model to decide what information to retain, weaken, or discard, making long contexts cheaper in both memory and compute. Gated Attention, in contrast, preserves exact attention behavior in every fourth block. The hybrid design ensures that DeltaNet provides efficiency on long sequences, while Gated Attention periodically restores precise global token connectivity. Additionally, the model was trained with multi-token prediction. Its native context length is 262,144 tokens, expandable to 1,010,000 tokens via YaRN/RoPE scaling.
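The layer layout and the context-scaling factor can be sanity-checked with a short Python sketch of the description above (a schematic of the published figures, not the model code):

```python
# Sketch of the layer layout described above: 10 repetitions of
# (3 × Gated DeltaNet → MoE, then 1 × Gated Attention → MoE).
pattern = (["gated_deltanet"] * 3 + ["gated_attention"]) * 10

print(len(pattern))                      # 40 layers in total
print(pattern.count("gated_attention"))  # 10 exact-attention layers
print(pattern.count("gated_deltanet"))   # 30 linear-attention layers

# YaRN/RoPE scaling factor implied by the context figures on the card:
print(round(1_010_000 / 262_144, 2))     # ≈ 3.85× the native context
```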

Among the benchmarks, several are particularly indicative. Terminal-Bench 2.0 tests an agent’s ability to operate in a terminal environment — Qwen3.6-35B-A3B ranks first among open models of comparable size with a score of 51.5, outperforming Qwen3.5-27B, Qwen3.5-35B-A3B, and Gemma4-31B. QwenWebBench evaluates frontend generation across Web Design, Web Apps, Games, SVG, Data Visualization, Animation, and 3D; the model again leads with 1397 Elo. On SkillsBench Avg5, which measures practical coding tasks through OpenCode, it also takes first place with 28.7. On MCPMark, which involves GitHub MCP and tool-use tasks, it scores 37.0 and surpasses all compared models. In the multimodal domain, the model delivers strong results: RealWorldQA — 85.3 (first among comparable open models), MMBench EN-DEV-v1.1 — 92.8, OmniDocBench1.5 — 89.9, CC-OCR — 81.9, VideoMMMU — 83.7, MLVU — 86.2. These metrics align well with scenarios where the model must read interfaces, documents, diagrams, screenshots, video clips, and connect visual information with code or instructions.

For inference, the developers recommend SGLang, vLLM, or KTransformers for production and high-throughput deployments, and suggest allocating at least a 128K-token context when a task requires complex reasoning. Recommended sampling parameters: for thinking mode on general tasks — temperature 1.0, top_p 0.95, top_k 20; for precise coding and WebDev — temperature 0.6, top_p 0.95, top_k 20; for non-thinking mode — temperature 0.7, top_p 0.80, top_k 20.
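As a minimal sketch, those presets can be kept in one table and applied per request to an OpenAI-compatible endpoint. The model id and the `build_request` helper below are illustrative assumptions, not part of an official API:

```python
# Recommended sampling presets from the model card; the mode keys and the
# build_request helper are illustrative names, not an official API.
SAMPLING = {
    "thinking":     {"temperature": 1.0, "top_p": 0.95, "top_k": 20},  # general tasks
    "coding":       {"temperature": 0.6, "top_p": 0.95, "top_k": 20},  # coding / WebDev
    "non_thinking": {"temperature": 0.7, "top_p": 0.80, "top_k": 20},
}

def build_request(mode: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat payload with the preset for `mode`.
    Note: some client libraries only accept top_k via an extra_body argument."""
    return {
        "model": "Qwen3.6-35B-A3B",  # placeholder model id
        "messages": [{"role": "user", "content": prompt}],
        **SAMPLING[mode],
    }

payload = build_request("coding", "Refactor this function to be iterative.")
print(payload["temperature"], payload["top_p"])  # 0.6 0.95
```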

Use cases: Qwen3.6-35B-A3B excels as a foundation for coding agents that fix bugs, edit multi-file projects, work with terminals, read large repositories, generate frontends, and leverage external tools through Qwen-Agent, Qwen Code, MCP, or compatible APIs. Thanks to multimodality, it is also suited for analyzing screenshots, UI layouts, documents, OCR, video, and visual QA tasks. With preserve_thinking, it is convenient for long agent sessions — the model can retain the reasoning context of previous steps, minimize redundant re-analysis, and perform step-by-step work more stably.


Announce Date: 15.04.2026
Parameters: 35B
Experts: 256
Activated at inference: 3B
Context: 262K
Layers: 40, using full attention: 10
Attention Type: Hybrid Attention
Mamba Type: Gated Delta Net
Developer: Qwen
Transformers Version: 4.57.1
vLLM Version: 0.17.0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3.6-35B-A3B capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name Context Type GPU Status Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • to maximize endpoint performance,
  • to enable the full context window for long sequences,
  • to ensure top-tier security by processing data in an isolated, dedicated environment,
  • to use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting Qwen3.6-35B-A3B

Prices:
Name                           Context  Parallelism  GPUs  Price/hr  TPS
teslat4-3.32.64.160            262,144  pipeline     3     $0.88     3.100
teslaa10-2.16.64.160           262,144  tensor       2     $0.93     3.594
teslat4-4.16.64.160            262,144  tensor       4     $0.96     5.453
teslaa2-3.32.128.160           262,144  pipeline     3     $1.06     3.100
rtx2080ti-4.16.32.160          262,144  tensor       4     $1.12     1.894
rtxa5000-2.16.64.160.nvlink    262,144  tensor       2     $1.23     3.594
teslaa2-4.32.128.160           262,144  tensor       4     $1.26     5.453
rtx3090-2.16.64.160            262,144  tensor       2     $1.56     3.594
rtx5090-1.16.64.160            262,144  -            1     $1.59     1.242
rtx3080-4.16.64.160            262,144  tensor       4     $1.82     1.183
rtx4090-2.16.64.160            262,144  tensor       2     $1.92     3.594
teslaa100-1.16.64.160          262,144  -            1     $2.37     9.782
h100-1.16.64.160               262,144  -            1     $3.83     9.782
h100nvl-1.16.96.160            262,144  -            1     $4.11     12.273
teslaa100-2.24.96.160.nvlink   262,144  tensor       2     $4.61     23.521
h200-1.16.128.160              262,144  -            1     $4.74     20.635
h200-2.24.256.160.nvlink       262,144  tensor       2     $9.40     45.226
Prices:
Name                           Context  Parallelism  GPUs  Price/hr  TPS
teslat4-4.16.64.160            262,144  tensor       4     $0.96     2.512
teslaa2-4.32.128.160           262,144  tensor       4     $1.26     2.512
teslaa10-3.16.96.160           262,144  pipeline     3     $1.34     4.430
teslaa10-4.12.48.160           262,144  tensor       4     $1.57     8.206
rtx3090-3.16.96.160            262,144  pipeline     3     $2.29     4.430
rtxa5000-4.16.128.160.nvlink   262,144  tensor       4     $2.34     8.206
teslaa100-1.16.64.160          262,144  -            1     $2.37     6.842
rtx4090-3.16.96.160            262,144  pipeline     3     $2.83     4.430
rtx3090-4.16.64.160            262,144  tensor       4     $2.89     8.206
rtx5090-2.16.64.160            262,144  tensor       2     $2.93     3.501
rtx4090-4.16.64.160            262,144  tensor       4     $3.60     8.206
h100-1.16.64.160               262,144  -            1     $3.83     6.842
h100nvl-1.16.96.160            262,144  -            1     $4.11     9.332
teslaa100-2.24.96.160.nvlink   262,144  tensor       2     $4.61     20.581
h200-1.16.128.160              262,144  -            1     $4.74     17.694
h200-2.24.256.160.nvlink       262,144  tensor       2     $9.40     42.286
Prices:
Name                           Context  Parallelism  GPUs  Price/hr  TPS
teslaa10-4.16.128.240          262,144  tensor       4     $1.76     1.865
rtx3090-4.16.96.320            262,144  tensor       4     $2.97     1.865
rtx4090-4.16.96.320            262,144  tensor       4     $3.68     1.865
h100nvl-1.16.96.240            262,144  -            1     $4.12     2.992
rtx5090-3.16.96.240            262,144  pipeline     3     $4.35     2.359
h200-1.16.128.240              262,144  -            1     $4.74     11.354
teslaa100-2.24.256.240         262,144  tensor       2     $4.93     14.240
teslaa100-2.24.256.320.nvlink  262,144  tensor       2     $4.94     14.240
rtx5090-4.16.128.320           262,144  tensor       4     $5.76     7.558
h100-2.24.256.240              262,144  tensor       2     $7.85     14.240
h200-2.24.256.240.nvlink       262,144  tensor       2     $9.41     35.946

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.