Qwen3.5-2B is a small yet fully featured member of the series, with 2 billion parameters and the core architectural advantages of Qwen3.5 intact. The model consists of 24 layers, of which 6 use full attention, with 2 KV heads and a hidden size of 2048. Its hybrid attention architecture (Gated DeltaNet + Gated Attention) processes long sequences efficiently with minimal memory consumption. The model supports a native context window of 262K tokens along with the series' multimodal capabilities.
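As a rough illustration of the hybrid stack described above, the following sketch lays out the 24 layers with 6 full-attention layers. Only the 24/6 split is stated in the description; the repeating 3:1 DeltaNet-to-attention interleave is our assumption, borrowed from earlier Qwen hybrid designs, and may not match the actual layer order.

```python
# Hypothetical layer layout for Qwen3.5-2B: 24 layers, 6 of them
# gated (full) attention, the rest Gated DeltaNet. The regular 3:1
# interleave is an assumption; only the totals come from the spec.
TOTAL_LAYERS = 24
FULL_ATTENTION_LAYERS = 6
BLOCK = TOTAL_LAYERS // FULL_ATTENTION_LAYERS  # one attention layer per block of 4

layout = [
    "gated_attention" if (i + 1) % BLOCK == 0 else "gated_deltanet"
    for i in range(TOTAL_LAYERS)
]
```

Whatever the true ordering, any valid layout must contain exactly 6 `gated_attention` and 18 `gated_deltanet` entries.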
By default, the model operates in non-thinking mode, but it can easily be switched to thinking mode, in which it generates internal reasoning within `<think>` tags. This lets developers and researchers see firsthand how even a small model structures its "thoughts" before responding. On language benchmarks, enabling thinking mode yields a significant quality gain: MMLU-Pro improves from 55.3 to 66.5, and SuperGPQA from 30.4 to 37.5, highlighting the value of reasoning even for small models. Its multimodal results are also impressive: MathVista (mini) (76.7), OCRBench (84.5), and RealWorldQA (74.5) are excellent scores for a 2B model. This makes it useful for simple text and object recognition in images, question answering over charts, and rapid prototyping of multimodal features.
The Qwen3.5-2B is ideal as a research tool and a platform for quickly testing hypotheses. It is suitable for startups, university labs, and developers who want to explore the capabilities of hybrid architectures and thinking mode before scaling up to larger models. Its main advantage is its minimal resource requirements while retaining all the key technologies of the Qwen3.5 family.
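For anyone experimenting with thinking mode, a minimal sketch of handling its output downstream: the `<think>` tag format is from the description above, while the helper name and the assumption that a single reasoning block precedes the visible answer are ours.

```python
import re

def split_thinking(response: str) -> tuple[str, str]:
    """Separate the <think>...</think> reasoning block from the final answer.

    Assumes at most one reasoning block, emitted before the visible answer,
    as described for thinking mode above.
    """
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", response, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    # Non-thinking mode: no reasoning block, the whole response is the answer.
    return "", response.strip()

reasoning, answer = split_thinking(
    "<think>2 + 2 is basic arithmetic; the sum is 4.</think>The answer is 4."
)
```

In non-thinking mode the same helper simply returns an empty reasoning string, so one code path covers both modes.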
| Model Name | Context | Type | GPU | Status | Link |
|---|---|---|---|---|---|
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following configurations:
| Name | GPU | Context | GPUs | Price | TPS | |
|---|---|---|---|---|---|---|
|  |  | 262,144.0 | 1 | $0.33 | 3.170 | Launch |
|  |  | 262,144.0 | 1 | $0.38 | 1.680 | Launch |
|  |  | 262,144.0 | 1 | $0.38 | 3.170 | Launch |
|  |  | 262,144.0 | 1 | $0.53 | 5.556 | Launch |
|  |  | 262,144.0 | 1 | $0.57 | 1.382 | Launch |
|  |  | 262,144.0 | 1 | $0.83 | 5.556 | Launch |
|  |  | 262,144.0 | 1 | $1.02 | 5.556 | Launch |
|  |  | 262,144.0 | 1 | $1.20 | 7.941 | Launch |
|  |  | 262,144.0 (tensor) | 2 | $1.23 | 11.883 | Launch |
|  |  | 262,144.0 | 1 | $1.59 | 7.941 | Launch |
|  |  | 262,144.0 | 1 | $2.37 | 22.252 | Launch |
|  |  | 262,144.0 | 1 | $3.83 | 22.252 | Launch |
|  |  | 262,144.0 | 1 | $4.11 | 26.426 | Launch |
|  |  | 262,144.0 | 1 | $4.74 | 40.438 | Launch |
| Name | GPU | Context | GPUs | Price | TPS | |
|---|---|---|---|---|---|---|
|  |  | 262,144.0 | 1 | $0.33 | 3.241 | Launch |
|  |  | 262,144.0 | 1 | $0.38 | 1.750 | Launch |
|  |  | 262,144.0 | 1 | $0.38 | 3.241 | Launch |
|  |  | 262,144.0 | 1 | $0.53 | 5.626 | Launch |
|  |  | 262,144.0 | 1 | $0.57 | 1.452 | Launch |
|  |  | 262,144.0 | 1 | $0.83 | 5.626 | Launch |
|  |  | 262,144.0 | 1 | $1.02 | 5.626 | Launch |
|  |  | 262,144.0 | 1 | $1.20 | 8.011 | Launch |
|  |  | 262,144.0 (tensor) | 2 | $1.23 | 11.953 | Launch |
|  |  | 262,144.0 | 1 | $1.59 | 8.011 | Launch |
|  |  | 262,144.0 | 1 | $2.37 | 22.322 | Launch |
|  |  | 262,144.0 | 1 | $3.83 | 22.322 | Launch |
|  |  | 262,144.0 | 1 | $4.11 | 26.496 | Launch |
|  |  | 262,144.0 | 1 | $4.74 | 40.509 | Launch |
| Name | GPU | Context | GPUs | Price | TPS | |
|---|---|---|---|---|---|---|
|  |  | 262,144.0 | 1 | $0.33 | 2.539 | Launch |
|  |  | 262,144.0 | 1 | $0.38 | 1.048 | Launch |
|  |  | 262,144.0 | 1 | $0.38 | 2.539 | Launch |
|  |  | 262,144.0 | 1 | $0.53 | 4.924 | Launch |
|  |  | 262,144.0 | 1 | $0.57 | 0.750 | Launch |
|  |  | 262,144.0 | 1 | $0.83 | 4.924 | Launch |
|  |  | 262,144.0 | 1 | $1.02 | 4.924 | Launch |
|  |  | 262,144.0 | 1 | $1.20 | 7.309 | Launch |
|  |  | 262,144.0 (tensor) | 2 | $1.23 | 11.251 | Launch |
|  |  | 262,144.0 | 1 | $1.59 | 7.309 | Launch |
|  |  | 262,144.0 | 1 | $2.37 | 21.620 | Launch |
|  |  | 262,144.0 | 1 | $3.83 | 21.620 | Launch |
|  |  | 262,144.0 | 1 | $4.11 | 25.794 | Launch |
|  |  | 262,144.0 | 1 | $4.74 | 39.807 | Launch |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.