Qwen3.5-2B

Tags: reasoning, multimodal

The Qwen3.5-2B is a small yet fully-featured model in the series with 2 billion parameters, preserving the core architectural advantages of Qwen3.5. The model consists of 24 layers with 6 layers of full attention and 2 KV heads, and a hidden representation size of 2048. Its hybrid attention architecture (Gated DeltaNet + Gated Attention) ensures efficient processing of long sequences with minimal memory consumption. The model supports a native context window of 262K tokens and the series' multimodal capabilities.
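To illustrate why the hybrid layout keeps memory consumption low at long context, the sketch below estimates KV cache size at the full 262K window. It assumes only the 6 full-attention layers keep a growing KV cache (the Gated DeltaNet layers maintain a constant-size recurrent state), with the stated 2 KV heads, FP16 storage, and an assumed head dimension of 128, which is not given in this card.

```python
def kv_cache_bytes(layers: int, kv_heads: int, head_dim: int,
                   seq_len: int, dtype_bytes: int = 2) -> int:
    """KV cache size: two tensors (K and V) per attention layer."""
    return 2 * layers * kv_heads * head_dim * seq_len * dtype_bytes

SEQ = 262_144      # native context window
HEAD_DIM = 128     # assumption: head dimension is not stated in the card

# Only the 6 full-attention layers accumulate a KV cache.
hybrid = kv_cache_bytes(layers=6, kv_heads=2, head_dim=HEAD_DIM, seq_len=SEQ)
# Hypothetical variant where all 24 layers used full attention:
dense = kv_cache_bytes(layers=24, kv_heads=2, head_dim=HEAD_DIM, seq_len=SEQ)

print(f"hybrid: {hybrid / 2**30:.2f} GiB")  # 6 full-attention layers
print(f"dense:  {dense / 2**30:.2f} GiB")   # all-attention baseline
```

Under these assumptions the hybrid design needs a quarter of the KV cache of an all-attention stack of the same depth.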

By default, the model operates in non-thinking mode, but it can easily be switched to thinking mode, in which it generates internal reasoning within <think> tags. This lets developers and researchers see firsthand how even a small model can structure its "thoughts" before responding. With thinking mode enabled, the model shows a significant quality increase on language benchmarks: MMLU-Pro improves from 55.3 to 66.5 and SuperGPQA from 30.4 to 37.5, highlighting the value of reasoning even for small models. Its multimodal scores are also impressive: MathVista (mini) at 76.7, OCRBench at 84.5, and RealWorldQA at 74.5 are excellent results for a 2B model. This makes it useful for simple text and object recognition in images, question answering over charts, and rapid prototyping of multimodal features.
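Thinking-mode output can be post-processed by separating the <think> block from the final answer. The parser below is an illustrative sketch, not an official Qwen utility; it assumes the reasoning appears in a single leading <think>...</think> block, as described above.

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Split a thinking-mode completion into (reasoning, answer).

    Assumes at most one leading <think>...</think> block; when none is
    present (non-thinking mode), reasoning is returned empty.
    """
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", text, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", text.strip()

reasoning, answer = split_thinking(
    "<think>2 + 2 is basic arithmetic.</think>The answer is 4."
)
print(reasoning)  # 2 + 2 is basic arithmetic.
print(answer)     # The answer is 4.
```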

The Qwen3.5-2B is ideal as a research tool and a platform for quickly testing hypotheses. It is suitable for startups, university labs, and developers who want to explore the capabilities of hybrid architectures and thinking mode before scaling up to larger models. Its main advantage is its minimal resource requirements while retaining all the key technologies of the Qwen3.5 family.


Announce Date: 28.02.2026
Parameters: 2B
Context: 262K
Layers: 24, using full attention: 6
Attention Type: Hybrid (Gated DeltaNet + Gated Attention)
Developer: Qwen
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3.5-2B capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name Context Type GPU Status Link
There are no public endpoints for this model yet.
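Once an endpoint is available, it can typically be queried over an OpenAI-compatible chat completions API. The snippet below only builds the request payload; the base URL, model name, and the thinking-mode toggle are assumptions for illustration, so check the endpoint documentation for the exact parameters.

```python
import json

API_URL = "https://example.immers.cloud/v1/chat/completions"  # placeholder URL
API_TOKEN = "YOUR_API_TOKEN"  # obtained from the token management page

def build_request(prompt: str, thinking: bool = False) -> dict:
    """Build an OpenAI-style chat completion payload (assumed schema)."""
    return {
        "model": "Qwen3.5-2B",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
        # Assumption: thinking mode is toggled via a chat-template flag,
        # as in other Qwen deployments; verify against the endpoint docs.
        "chat_template_kwargs": {"enable_thinking": thinking},
    }

payload = build_request("Summarize the chart in one sentence.", thinking=True)
print(json.dumps(payload, indent=2))
# Send with e.g. requests.post(API_URL, json=payload,
#                              headers={"Authorization": f"Bearer {API_TOKEN}"})
```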

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable the full context window for long sequences,
  • ensure top-tier security by processing data in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
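For self-hosted deployment with custom weights, a vLLM launch along these lines is a common starting point. This is a configuration sketch, not a verified recipe: the model identifier and the adapter path are placeholders, and you may need to reduce --max-model-len to fit your GPU memory.

```shell
# Sketch: serve the model (or a fine-tuned checkpoint) with vLLM.
# "Qwen/Qwen3.5-2B" is a placeholder model ID; substitute your own path.
vllm serve Qwen/Qwen3.5-2B \
  --max-model-len 262144 \
  --enable-lora \
  --lora-modules my-adapter=/path/to/lora   # hypothetical LoRA adapter path
```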

Recommended server configurations for hosting Qwen3.5-2B

Prices:
Name                          Context   Type    GPUs  Price, hour  TPS     Link
teslat4-1.16.16.160           262,144           1     $0.33        3.170   Launch
rtx2080ti-1.10.16.500         262,144           1     $0.38        1.680   Launch
teslaa2-1.16.32.160           262,144           1     $0.38        3.170   Launch
teslaa10-1.16.32.160          262,144           1     $0.53        5.556   Launch
rtx3080-1.16.32.160           262,144           1     $0.57        1.382   Launch
rtx3090-1.16.24.160           262,144           1     $0.83        5.556   Launch
rtx4090-1.16.32.160           262,144           1     $1.02        5.556   Launch
teslav100-1.12.64.160         262,144           1     $1.20        7.941   Launch
rtxa5000-2.16.64.160.nvlink   262,144   tensor  2     $1.23        11.883  Launch
rtx5090-1.16.64.160           262,144           1     $1.59        7.941   Launch
teslaa100-1.16.64.160         262,144           1     $2.37        22.252  Launch
h100-1.16.64.160              262,144           1     $3.83        22.252  Launch
h100nvl-1.16.96.160           262,144           1     $4.11        26.426  Launch
h200-1.16.128.160             262,144           1     $4.74        40.438  Launch
Prices:
Name                          Context   Type    GPUs  Price, hour  TPS     Link
teslat4-1.16.16.160           262,144           1     $0.33        3.241   Launch
rtx2080ti-1.10.16.500         262,144           1     $0.38        1.750   Launch
teslaa2-1.16.32.160           262,144           1     $0.38        3.241   Launch
teslaa10-1.16.32.160          262,144           1     $0.53        5.626   Launch
rtx3080-1.16.32.160           262,144           1     $0.57        1.452   Launch
rtx3090-1.16.24.160           262,144           1     $0.83        5.626   Launch
rtx4090-1.16.32.160           262,144           1     $1.02        5.626   Launch
teslav100-1.12.64.160         262,144           1     $1.20        8.011   Launch
rtxa5000-2.16.64.160.nvlink   262,144   tensor  2     $1.23        11.953  Launch
rtx5090-1.16.64.160           262,144           1     $1.59        8.011   Launch
teslaa100-1.16.64.160         262,144           1     $2.37        22.322  Launch
h100-1.16.64.160              262,144           1     $3.83        22.322  Launch
h100nvl-1.16.96.160           262,144           1     $4.11        26.496  Launch
h200-1.16.128.160             262,144           1     $4.74        40.509  Launch
Prices:
Name                          Context   Type    GPUs  Price, hour  TPS     Link
teslat4-1.16.16.160           262,144           1     $0.33        2.539   Launch
rtx2080ti-1.10.16.500         262,144           1     $0.38        1.048   Launch
teslaa2-1.16.32.160           262,144           1     $0.38        2.539   Launch
teslaa10-1.16.32.160          262,144           1     $0.53        4.924   Launch
rtx3080-1.16.32.160           262,144           1     $0.57        0.750   Launch
rtx3090-1.16.24.160           262,144           1     $0.83        4.924   Launch
rtx4090-1.16.32.160           262,144           1     $1.02        4.924   Launch
teslav100-1.12.64.160         262,144           1     $1.20        7.309   Launch
rtxa5000-2.16.64.160.nvlink   262,144   tensor  2     $1.23        11.251  Launch
rtx5090-1.16.64.160           262,144           1     $1.59        7.309   Launch
teslaa100-1.16.64.160         262,144           1     $2.37        21.620  Launch
h100-1.16.64.160              262,144           1     $3.83        21.620  Launch
h100nvl-1.16.96.160           262,144           1     $4.11        25.794  Launch
h200-1.16.128.160             262,144           1     $4.74        39.807  Launch
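To translate the hourly prices above into a monthly budget, a quick calculation helps. The sketch assumes a 730-hour average month (24 × 365 / 12) and continuous uptime; actual billing may differ under long-term monthly plans.

```python
HOURS_PER_MONTH = 730  # average month: 24 * 365 / 12

def monthly_cost(price_per_hour: float) -> float:
    """Cost of running one instance around the clock for a month."""
    return price_per_hour * HOURS_PER_MONTH

# Hourly prices taken from the tables above.
for name, price in [("teslat4", 0.33), ("teslaa100", 2.37), ("h200", 4.74)]:
    print(f"{name}: ${monthly_cost(price):.2f}/month")
```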

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.