Qwen3-4B-Instruct-2507

Qwen3-4B-Instruct-2507 has 4.02 billion parameters (including embeddings), 36 transformer layers, and Grouped Query Attention (GQA) with 32 query heads and 8 key/value heads, a configuration that balances quality against memory use. It is derived from the hybrid Qwen3-4B base but operates exclusively in non-thinking mode: it never emits <think></think> blocks, which keeps response latency low. Native support for a context length of 262,144 tokens enables efficient handling of large documents, extended conversations, and complex multi-step tasks without degradation in information processing quality.
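The memory benefit of GQA at full context can be estimated with a quick back-of-the-envelope calculation. This is a sketch; the head dimension of 128 is an assumption based on typical Qwen3 configurations, not a value stated on this page:

```python
# KV-cache size for Qwen3-4B-Instruct-2507 at full context, FP16.
layers = 36          # transformer layers
kv_heads = 8         # GQA key/value heads
query_heads = 32     # query heads
head_dim = 128       # assumed value, not confirmed by this page
seq_len = 262_144    # native context length
bytes_per_value = 2  # FP16

# Factor of 2 covers both keys and values.
kv_cache_bytes = 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value
kv_cache_gib = kv_cache_bytes / 2**30

# Classic multi-head attention would cache one K/V pair per query head.
mha_cache_gib = kv_cache_gib * (query_heads / kv_heads)

print(f"GQA KV cache at 262K context: {kv_cache_gib:.0f} GiB")   # 36 GiB
print(f"Equivalent MHA KV cache:      {mha_cache_gib:.0f} GiB")  # 144 GiB
```

Under these assumptions, GQA cuts the full-context KV cache by 4x (8 KV heads instead of 32), which is why the long-context configurations below remain feasible on multi-GPU consumer hardware.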

Compared with the original Qwen3-4B, the 2507 update improves alignment with user preferences, producing more relevant and helpful responses, and significantly strengthens multilingual content processing.

The model demonstrates outstanding results on key benchmarks, outperforming the proprietary GPT-4.1-nano across all major metrics: MMLU-Pro (69.6 vs 62.8), GPQA (62.0 vs 50.3), and particularly impressive scores on ZebraLogic (80.2 vs 14.8) and creative content generation, where it achieves 83.5 (vs 72.7). The model excels in instruction-following tasks, achieving 83.4% on IFEval and 43.4 on Arena-Hard v2. It also performs exceptionally well in agent-based tasks and tool usage, showing strong results on the BFCL-v3 (61.9) and TAU benchmark suites, making it ideal for integration into automated systems.

Qwen3-4B-Instruct-2507 is highly suitable for business process automation, including customer service via intelligent chatbots, document processing and analysis, report generation, and personalized recommendations. It is effective in creating and localizing SEO-optimized marketing content, product descriptions, social media posts, and more. Thanks to seamless API integration, the model can be deployed for automation within CRM and ERP systems, as well as for any tasks requiring intelligent routing and fast, real-time query processing.


Announce Date: 07.08.2025
Parameters: 4B
Context: 262K
Layers: 36
Attention Type: Full attention with GQA
Developer: Qwen
Transformers Version: 4.51.0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3-4B-Instruct-2507 capabilities. You can obtain an API access token on the token management page after registration and verification.
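Once an endpoint is available, requests follow the usual chat-completions shape. The sketch below only builds the request body; the base URL is a placeholder and the exact model identifier is an assumption, so substitute the values shown on the endpoint page:

```python
import json

# Sketch of a chat-completions request for an OpenAI-compatible endpoint.
# BASE_URL is a placeholder; use the real endpoint URL and the API token
# from the token management page.
BASE_URL = "https://example-endpoint.invalid/v1/chat/completions"  # placeholder
API_TOKEN = "YOUR_API_TOKEN"  # issued after registration and verification

payload = {
    "model": "Qwen3-4B-Instruct-2507",  # assumed identifier
    "messages": [
        {"role": "user", "content": "Summarize this ticket in one sentence."}
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}
headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/json",
}

body = json.dumps(payload)
print(body)
# To send: requests.post(BASE_URL, data=body, headers=headers)
```

Because the model runs in non-thinking mode, the response contains the answer directly, with no <think></think> block to strip on the client side.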
Model Name | Context | Type | GPU | Status | Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting Qwen3-4B-Instruct-2507

Prices:
Name | Context, tokens | Parallelism | GPUs | Price/hour | TPS | Max Concurrency
teslaa10-1.16.32.160 | 80,000 | - | 1 | $0.53 | - | 0.432
teslat4-2.16.32.160 | 80,000 | tensor | 2 | $0.54 | - | 0.563
teslaa2-2.16.32.160 | 80,000 | tensor | 2 | $0.57 | - | 0.563
rtx2080ti-2.12.64.160 | 80,000 | tensor | 2 | $0.69 | - | 0.313
rtx3090-1.16.24.160 | 80,000 | - | 1 | $0.83 | - | 0.432
teslat4-4.16.64.160 | 262,144 | tensor | 4 | $0.96 | - | 1.224
rtx4090-1.16.32.160 | 80,000 | - | 1 | $1.02 | - | 0.432
teslav100-1.12.64.160 | 80,000 | - | 1 | $1.20 | - | 0.632
rtxa5000-2.16.64.160.nvlink | 80,000 | tensor | 2 | $1.23 | - | 0.963
teslaa2-4.32.128.160 | 262,144 | tensor | 4 | $1.26 | - | 1.224
teslaa10-3.16.96.160 | 262,144 | pipeline | 3 | $1.34 | - | 1.493
rtx3080-3.16.64.160 | 80,000 | pipeline | 3 | $1.43 | - | 0.443
teslaa10-4.12.48.160 | 262,144 | tensor | 4 | $1.57 | - | 2.024
rtx5090-1.16.64.160 | 80,000 | - | 1 | $1.59 | - | 0.632
rtx3080-4.16.64.160 | 80,000 | tensor | 4 | $1.82 | - | 0.624
teslav100-2.16.64.240 | 262,144 | tensor | 2 | $2.22 | - | 1.363
rtx3090-3.16.96.160 | 262,144 | pipeline | 3 | $2.29 | 53.290 | 1.493
rtxa5000-4.16.128.160.nvlink | 262,144 | tensor | 4 | $2.34 | - | 2.024
teslaa100-1.16.64.160 | 262,144 | - | 1 | $2.37 | 93.920 | 1.832
rtx4090-3.16.96.160 | 262,144 | pipeline | 3 | $2.83 | 81.530 | 1.493
rtx3090-4.16.64.160 | 262,144 | tensor | 4 | $2.89 | - | 2.024
rtx5090-2.16.64.160 | 262,144 | tensor | 2 | $2.93 | 140.620 | 1.363
rtx4090-4.16.64.160 | 262,144 | tensor | 4 | $3.60 | - | 2.024
h100-1.16.64.160 | 262,144 | - | 1 | $3.83 | 107.330 | 1.832
h100nvl-1.16.96.160 | 262,144 | - | 1 | $4.11 | - | 2.182
h200-1.16.128.160 | 262,144 | - | 1 | $4.74 | - | 3.357
Prices:
Name | Context, tokens | Parallelism | GPUs | Price/hour | TPS | Max Concurrency
teslaa10-1.16.32.160 | 80,000 | - | 1 | $0.53 | - | 0.386
teslat4-2.16.32.160 | 80,000 | tensor | 2 | $0.54 | - | 0.517
teslaa2-2.16.32.160 | 80,000 | tensor | 2 | $0.57 | - | 0.517
rtx3090-1.16.24.160 | 80,000 | - | 1 | $0.83 | - | 0.386
rtx2080ti-3.12.24.120 | 80,000 | pipeline | 3 | $0.84 | - | 0.472
teslat4-4.16.64.160 | 262,144 | tensor | 4 | $0.96 | - | 1.178
rtx4090-1.16.32.160 | 80,000 | - | 1 | $1.02 | - | 0.386
rtx2080ti-4.16.32.160 | 80,000 | tensor | 4 | $1.12 | - | 0.678
teslav100-1.12.64.160 | 80,000 | - | 1 | $1.20 | - | 0.586
rtxa5000-2.16.64.160.nvlink | 80,000 | tensor | 2 | $1.23 | - | 0.917
teslaa2-4.32.128.160 | 262,144 | tensor | 4 | $1.26 | - | 1.178
teslaa10-3.16.96.160 | 262,144 | pipeline | 3 | $1.34 | 56.230 | 1.447
rtx3080-3.16.64.160 | 80,000 | pipeline | 3 | $1.43 | - | 0.397
teslaa10-4.12.48.160 | 262,144 | tensor | 4 | $1.57 | - | 1.978
rtx5090-1.16.64.160 | 80,000 | - | 1 | $1.59 | - | 0.586
rtx3080-4.16.64.160 | 80,000 | tensor | 4 | $1.82 | - | 0.578
teslav100-2.16.64.240 | 262,144 | tensor | 2 | $2.22 | - | 1.317
rtx3090-3.16.96.160 | 262,144 | pipeline | 3 | $2.29 | 102.570 | 1.447
rtxa5000-4.16.128.160.nvlink | 262,144 | tensor | 4 | $2.34 | - | 1.978
teslaa100-1.16.64.160 | 262,144 | - | 1 | $2.37 | 131.350 | 1.786
rtx4090-3.16.96.160 | 262,144 | pipeline | 3 | $2.83 | 66.130 | 1.447
rtx3090-4.16.64.160 | 262,144 | tensor | 4 | $2.89 | - | 1.978
rtx5090-2.16.64.160 | 262,144 | tensor | 2 | $2.93 | - | 1.317
rtx4090-4.16.64.160 | 262,144 | tensor | 4 | $3.60 | - | 1.978
h100-1.16.64.160 | 262,144 | - | 1 | $3.83 | 136.710 | 1.786
h100nvl-1.16.96.160 | 262,144 | - | 1 | $4.11 | - | 2.136
h200-1.16.128.160 | 262,144 | - | 1 | $4.74 | - | 3.311
Prices:
Name | Context, tokens | Parallelism | GPUs | Price/hour | TPS | Max Concurrency
teslaa10-1.16.32.160 | 80,000 | - | 1 | $0.53 | - | 0.307
teslat4-2.16.32.160 | 80,000 | tensor | 2 | $0.54 | - | 0.438
teslaa2-2.16.32.160 | 80,000 | tensor | 2 | $0.57 | - | 0.438
rtx3090-1.16.24.160 | 80,000 | - | 1 | $0.83 | - | 0.307
rtx2080ti-3.12.24.120 | 80,000 | pipeline | 3 | $0.84 | - | 0.393
teslat4-4.16.64.160 | 262,144 | tensor | 4 | $0.96 | - | 1.099
rtx4090-1.16.32.160 | 80,000 | - | 1 | $1.02 | - | 0.307
rtx2080ti-4.16.32.160 | 80,000 | tensor | 4 | $1.12 | - | 0.599
teslav100-1.12.64.160 | 80,000 | - | 1 | $1.20 | - | 0.507
rtxa5000-2.16.64.160.nvlink | 80,000 | tensor | 2 | $1.23 | - | 0.838
teslaa2-4.32.128.160 | 262,144 | tensor | 4 | $1.26 | - | 1.099
teslaa10-3.16.96.160 | 262,144 | pipeline | 3 | $1.34 | 46.800 | 1.368
rtx3080-3.16.64.160 | 80,000 | pipeline | 3 | $1.43 | - | 0.318
teslaa10-4.12.48.160 | 262,144 | tensor | 4 | $1.57 | - | 1.899
rtx5090-1.16.64.160 | 80,000 | - | 1 | $1.59 | - | 0.507
rtx3080-4.16.64.160 | 80,000 | tensor | 4 | $1.82 | - | 0.499
teslav100-2.16.64.240 | 262,144 | tensor | 2 | $2.22 | - | 1.238
rtx3090-3.16.96.160 | 262,144 | pipeline | 3 | $2.29 | 68.870 | 1.368
rtxa5000-4.16.128.160.nvlink | 262,144 | tensor | 4 | $2.34 | - | 1.899
teslaa100-1.16.64.160 | 262,144 | - | 1 | $2.37 | 102.860 | 1.707
rtx4090-3.16.96.160 | 262,144 | pipeline | 3 | $2.83 | 79.880 | 1.368
rtx3090-4.16.64.160 | 262,144 | tensor | 4 | $2.89 | - | 1.899
rtx5090-2.16.64.160 | 262,144 | tensor | 2 | $2.93 | - | 1.238
rtx4090-4.16.64.160 | 262,144 | tensor | 4 | $3.60 | - | 1.899
h100-1.16.64.160 | 262,144 | - | 1 | $3.83 | 118.700 | 1.707
h100nvl-1.16.96.160 | 262,144 | - | 1 | $4.11 | - | 2.057
h200-1.16.128.160 | 262,144 | - | 1 | $4.74 | - | 3.232

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.