Qwen3-4B-Thinking-2507 is an enhanced version of Qwen3-4B. It is built on the same base architecture, with 4 billion parameters, 36 layers, and Grouped Query Attention (GQA) using 32 query heads and 8 key/value heads, but is set apart by specialized training for deep question analysis and multi-step problem solving. The model supports an extended reasoning length, allowing it to examine every aspect of a task thoroughly before formulating the final answer, and natively handles a 262K-token context. It generates a visible reasoning trace inside <think></think> blocks, so users can follow the solution logic, while significantly improving output quality on complex tasks.
The model delivers exceptional performance on tasks requiring deep analysis. On the AIME25 math competition benchmark it scores 81.3, which is 15.7 points higher than the base version, and on HMMT25 (the Harvard-MIT Mathematics Tournament) it scores 55.5, outperforming the base model by 13.4 points. On PhD-level academic benchmarks it achieves results remarkable for a 4-billion-parameter model: GPQA (65.8) and SuperGPQA (47.8). On agent benchmarks it surpasses many specialized models, with BFCL-v3 (71.2), TAU1-Retail (66.1), and TAU2-Retail (53.5), confirming its strength in complex, multi-step planning.
Qwen3-4B-Thinking-2507 is well suited to everyday tasks that are simple yet require thoughtful processing, such as preparing literature reviews, drafting academic paper templates, and analyzing trends in statistical data. It is also highly effective on more complex technical challenges, including software debugging and architectural design, as well as in educational applications such as creating teaching materials and automated grading systems.
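For local experimentation, the thinking output can be handled as in the minimal sketch below, which uses Hugging Face Transformers; the prompt, generation budget, and parsing details are illustrative assumptions rather than a prescribed setup.

```python
# Minimal sketch: run the model with Hugging Face Transformers and split the visible
# reasoning trace from the final answer. Prompt and generation budget are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-4B-Thinking-2507"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Thinking models produce long chains of thought, so allow a generous new-token budget.
out_ids = model.generate(**inputs, max_new_tokens=8192)[0][len(inputs.input_ids[0]):].tolist()

# Everything up to the closing </think> token is the reasoning trace; the rest is the answer.
think_end = tokenizer.convert_tokens_to_ids("</think>")
split = out_ids.index(think_end) + 1 if think_end in out_ids else 0
reasoning = tokenizer.decode(out_ids[:split], skip_special_tokens=True).strip()
answer = tokenizer.decode(out_ids[split:], skip_special_tokens=True).strip()
print("Reasoning:", reasoning)
print("Answer:", answer)
```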
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
The available private-instance configurations and prices are listed below:
Name | vCPU | RAM, MB | Disk, GB | GPUs | Price
---|---|---|---|---|---
 | 16 | 65536 | 160 | 2 | $0.93
 | 16 | 65536 | 160 | 4 | $1.18
 | 16 | 65536 | 160 | 4 | $1.48
 | 16 | 65536 | 160 | 2 | $1.67
 | 16 | 65536 | 160 | 4 | $1.82
 | 16 | 65536 | 160 | 2 | $2.19
 | 16 | 65536 | 160 | 1 | $2.58
 | 16 | 65536 | 160 | 2 | $2.93
 | 16 | 65536 | 160 | 1 | $5.11
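Once a rented instance is up, it is typically exposed through an OpenAI-compatible server (for example, vLLM serving Qwen/Qwen3-4B-Thinking-2507). The sketch below assumes such a setup; the base_url, api_key, and served model name are placeholders for your own deployment.

```python
# Minimal sketch: query a self-hosted, OpenAI-compatible endpoint on a dedicated instance.
# base_url, api_key, and the served model name are placeholders for your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://YOUR-INSTANCE-IP:8000/v1",  # placeholder address of your instance
    api_key="EMPTY",                             # many self-hosted servers ignore the key
)

response = client.chat.completions.create(
    model="Qwen/Qwen3-4B-Thinking-2507",
    messages=[{"role": "user", "content": "Outline a plan for debugging a memory leak in a web service."}],
    max_tokens=4096,
)

text = response.choices[0].message.content
# Depending on server configuration, the reasoning trace may arrive inline before a
# closing </think> tag; split it off if only the final answer is needed.
reasoning, _, answer = text.partition("</think>")
print(answer.strip() or text)
```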
Contact our dedicated neural network support team at nn@immers.cloud, or send your request to the sales department at sale@immers.cloud.