Qwen3.5-27B

reasoning
multimodal

Qwen3.5-27B is a dense model in the series with 27 billion parameters, 64 layers, and a hidden size of 4096. Unlike the MoE models in the series, the 27B variant uses no expert routing, which makes its behavior more predictable and stable in tasks requiring sequential logical inference. It retains the hybrid attention mechanism, enabling efficient processing of long sequences (native context window of 262K tokens).
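One practical benefit of the hybrid design is a much smaller KV cache: only the full-attention layers accumulate a per-token cache. The back-of-envelope sketch below uses the figures from this page (64 layers, 16 with full attention, 262,144-token context); the GQA layout of 8 KV heads × 128 head dim and the fp16 cache are assumptions, not stated here:

```python
# Back-of-envelope KV-cache estimate for the hybrid attention design.
# Known from this page: 64 layers, 16 of them full attention, 262,144-token context.
# ASSUMED (not stated here): 8 KV heads x 128 head dim (GQA) and an fp16 cache.

FULL_ATTN_LAYERS = 16      # only these layers keep a growing per-token KV cache
KV_HEADS = 8               # assumption
HEAD_DIM = 128             # assumption
BYTES_PER_VALUE = 2        # fp16
CONTEXT = 262_144

# 2x for keys and values
per_token = FULL_ATTN_LAYERS * 2 * KV_HEADS * HEAD_DIM * BYTES_PER_VALUE
total_gib = per_token * CONTEXT / 2**30
print(f"{per_token} bytes per token, {total_gib:.1f} GiB at full context")
```

Under these assumptions the cache stays around 16 GiB even at full context, versus roughly four times that if all 64 layers used full attention.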

Thanks to full parameter activation, the model delivers superior results on tasks requiring adherence to complex instructions. Its IFEval score (95.0) is the highest in the medium-sized lineup, confirming its ability to follow user instructions precisely. In mathematical reasoning (e.g., HMMT Feb 25: 92.0) and programming (SWE-bench Verified: 72.4, LiveCodeBench v6: 80.7) it achieves top-tier results, outperforming the MoE-based models in the series. Its multimodal capabilities are also strong: it holds the family's best result on BabyVision (44.6) and is among the leaders on MathVision (86.0) and video understanding (VideoMME: 87.0).

The uniqueness of Qwen3.5-27B lies in its reliability and predictability for engineering tasks. It is the ideal choice for fintech applications, legal analysis, document automation, and building complex customer support chatbots, where accuracy and stability of responses are more critical than marginal computational savings. It stands out from MoE models due to its determinism and ease of optimization for specific tasks.


Announce Date: 24.02.2026
Parameters: 28B
Context: 262K
Layers: 64 (16 with full attention)
Attention Type: Hybrid (linear + full attention)
Developer: Qwen
Transformers Version: 4.57.0.dev0
vLLM Version: 0.17.0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3.5-27B capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
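As a sketch of what launching such an instance can look like, the helper below assembles a standard `vllm serve` command for a tensor-parallel deployment. The flags themselves are standard vLLM options, but the Hugging Face model id `Qwen/Qwen3.5-27B` is an assumption, not confirmed by this page:

```python
# Sketch: assemble a standard `vllm serve` command for a tensor-parallel
# private instance. --tensor-parallel-size should match the GPU count of the
# rented configuration; the model id "Qwen/Qwen3.5-27B" is an assumption.

def vllm_command(gpus: int, context: int = 262_144) -> str:
    parts = [
        "vllm", "serve", "Qwen/Qwen3.5-27B",
        "--tensor-parallel-size", str(gpus),
        "--max-model-len", str(context),
    ]
    return " ".join(parts)

# e.g. for one of the 2-GPU configurations listed below
print(vllm_command(2))
```

Lowering `--max-model-len` below the native 262,144 tokens is a common way to fit the KV cache on smaller GPUs at the cost of context length.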

Recommended server configurations for hosting Qwen3.5-27B

Prices:
Name                          Context  GPUs  Parallelism  Price, hour  TPS
teslat4-3.32.64.160           262,144  3     tensor       $0.88        1.054
teslaa10-2.16.64.160          262,144  2     tensor       $0.93        1.209
teslaa2-3.32.128.160          262,144  3     tensor       $1.06        1.054
rtxa5000-2.16.64.160.nvlink   262,144  2     tensor       $1.23        1.209
rtx3090-2.16.64.160           262,144  2     tensor       $1.56        1.209
rtx4090-2.16.64.160           262,144  2     tensor       $1.92        1.209
teslav100-2.16.64.240         262,144  2     tensor       $2.22        2.101
teslaa100-1.16.64.160         262,144  1     -            $2.37        3.148
rtx5090-2.16.64.160           262,144  2     tensor       $2.93        2.101
h100-1.16.64.160              262,144  1     -            $3.83        3.148
h100nvl-1.16.96.160           262,144  1     -            $4.11        3.928
h200-1.16.128.160             262,144  1     -            $4.74        6.549
Prices:
Name                          Context  GPUs  Parallelism  Price, hour  TPS
teslat4-4.16.64.160           262,144  4     tensor       $0.96        1.168
teslaa2-4.32.128.160          262,144  4     tensor       $1.26        1.168
teslaa10-3.16.96.160          262,144  3     tensor       $1.34        1.769
teslav100-2.16.64.240         262,144  2     tensor       $2.22        1.478
rtx3090-3.16.96.160           262,144  3     tensor       $2.29        1.769
rtxa5000-4.16.128.160.nvlink  262,144  4     tensor       $2.34        2.952
teslaa100-1.16.64.160         262,144  1     -            $2.37        2.524
rtx4090-3.16.96.160           262,144  3     tensor       $2.83        1.769
rtx5090-2.16.64.160           262,144  2     tensor       $2.93        1.478
h100-1.16.64.160              262,144  1     -            $3.83        2.524
h100nvl-1.16.96.160           262,144  1     -            $4.11        3.305
h200-1.16.128.160             262,144  1     -            $4.74        5.925
Prices:
Name                          Context  GPUs  Parallelism  Price, hour  TPS
teslaa2-6.32.128.160          262,144  6     tensor       $1.65        1.217
teslaa10-4.16.128.160         262,144  4     tensor       $1.75        1.527
rtxa5000-4.16.128.160.nvlink  262,144  4     tensor       $2.34        1.527
teslaa100-1.16.128.160        262,144  1     -            $2.50        1.100
rtx3090-4.16.96.320           262,144  4     tensor       $2.97        1.527
rtx4090-4.16.96.320           262,144  4     tensor       $3.68        1.527
teslav100-3.64.256.320        262,144  3     tensor       $3.89        1.682
h100-1.16.128.160             262,144  1     -            $3.95        1.100
h100nvl-1.16.96.160           262,144  1     -            $4.11        1.880
rtx5090-3.16.96.160           262,144  3     tensor       $4.34        1.682
h200-1.16.128.160             262,144  1     -            $4.74        4.500
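When choosing between configurations, dividing the hourly price by the measured TPS gives a rough cost per unit of throughput (lower is better). A minimal sketch using three rows from the first pricing table above:

```python
# Rough cost per unit of throughput ($/hour divided by TPS, lower is better)
# for three sample configurations taken from the first pricing table above.
configs = {
    "teslat4-3.32.64.160": (0.88, 1.054),    # cheapest hourly rate
    "teslaa100-1.16.64.160": (2.37, 3.148),  # single-GPU mid-range
    "h200-1.16.128.160": (4.74, 6.549),      # fastest single GPU
}
cost_per_tps = {name: round(price / tps, 3) for name, (price, tps) in configs.items()}
for name, cost in cost_per_tps.items():
    print(f"{name}: ${cost}/hour per TPS")
```

Note that the cheapest hourly rate is not the cheapest per token: in this sample the faster single-GPU cards deliver throughput at a lower cost per TPS than the budget multi-GPU setup.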

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.