Qwen3-235B-A22B-Thinking-2507

reasoning

In the 2507 update, developers discontinued the hybrid mode, introducing two highly optimized versions of the flagship Qwen3-235B-A22B model. Qwen3-235B-A22B-Thinking-2507 is the dedicated Thinking version, featuring doubled reasoning length and significantly enhanced chain-of-thought algorithms. The model's architecture remains unchanged—it is still a Mixture-of-Experts (MoE) model with 235 billion total parameters and 128 experts, of which only 22 billion parameters and 8 experts are activated per token, ensuring computational efficiency while preserving the knowledge capacity of the full 235-billion-parameter system. Additionally, developers have implemented native support for a context length of 262,144 tokens, unlocking new possibilities for analyzing lengthy documents, codebases, and performing multi-step reasoning. Alongside the main version, an FP8-quantized model has also been released.
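The per-token expert selection described above can be sketched as a top-k softmax router. This is an illustrative sketch only; the function names and shapes are assumptions, not the actual Qwen3 implementation.

```python
import numpy as np

def route_tokens(hidden, gate_w, top_k=8):
    """Illustrative top-k MoE routing: for each token, score all experts,
    keep the top_k, and renormalize their weights to sum to 1.
    hidden: (tokens, d_model), gate_w: (d_model, n_experts)."""
    logits = hidden @ gate_w                        # (tokens, n_experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)           # softmax over experts
    top = np.argsort(probs, axis=-1)[:, -top_k:]    # indices of the chosen experts
    weights = np.take_along_axis(probs, top, axis=-1)
    weights /= weights.sum(-1, keepdims=True)       # renormalize over the picked experts
    return top, weights

# 4 tokens routed over 128 experts with 8 active each, as in the model above
rng = np.random.default_rng(0)
idx, w = route_tokens(rng.normal(size=(4, 64)), rng.normal(size=(64, 128)))
print(idx.shape, w.shape)  # (4, 8) (4, 8)
```

Only the 8 selected experts run their feed-forward pass for that token, which is why the compute cost tracks the 22B activated parameters rather than the full 235B.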

Qwen3-235B-A22B-Thinking-2507 shows dramatic gains on benchmarks, particularly in agent-based tasks, where it improves on the previous version by up to 108% on TAU2-Telecom, 93% on TAU2-Airline, and 78% on TAU2-Retail. In mathematical competitions, it reaches 92.3% on AIME25, trailing only OpenAI's o4-mini (92.7%), and leads all other compared models on HMMT25 with 83.9%. In programming, it sets a new standard with 74.1% on LiveCodeBench v6; in scientific reasoning, it scores 81.1% on GPQA, surpassing the 79.6% of Claude Opus 4 Thinking.

Qwen3-235B-A22B-Thinking-2507 is ideally suited for solving complex tasks requiring deep analysis: mathematical proofs and Olympiad-level problems, development of sophisticated algorithms and architectural designs, scientific research and data analysis, legal analysis and document drafting—along with many other applications where the emphasis is not on response speed, but on accuracy and logical coherence.


Announce Date: 25.07.2025
Parameters: 235B
Experts: 128
Activated at inference: 22B
Context: 262,144 (256K)
Layers: 94
Attention Type: Full Attention
Developer: Qwen
Transformers Version: 4.51.0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3-235B-A22B-Thinking-2507 capabilities. You can obtain an API access token on the token management page after registration and verification.
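Assuming the endpoint exposes an OpenAI-compatible chat API (common for hosted inference, but an assumption here), a request body can be assembled as below. The URL and token are placeholders; substitute the real values shown on the token management page.

```python
import json

# Placeholders -- replace with the endpoint URL and token from your account.
API_URL = "https://example.invalid/v1/chat/completions"  # hypothetical
API_TOKEN = "YOUR_TOKEN"

payload = {
    "model": "Qwen3-235B-A22B-Thinking-2507",
    "messages": [
        {"role": "user", "content": "Prove that sqrt(2) is irrational."}
    ],
    "max_tokens": 32768,   # thinking models need a generous output budget
    "temperature": 0.6,
}
headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/json",
}

# Send with any HTTP client, e.g.:
#   requests.post(API_URL, headers=headers, data=json.dumps(payload))
print(payload["model"])
```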
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
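A rough way to see why every configuration below is multi-GPU: the weights alone of a 235B-parameter model occupy hundreds of gigabytes. A back-of-envelope estimate (weights only; KV cache and activations for the 256K context add substantially more):

```python
PARAMS = 235e9  # total parameters, including inactive experts

def weight_gib(params, bytes_per_param):
    """Memory for the weights alone, in GiB (KV cache and activations are extra)."""
    return params * bytes_per_param / 2**30

bf16 = weight_gib(PARAMS, 2)   # 16-bit weights
fp8  = weight_gib(PARAMS, 1)   # the FP8-quantized release
print(f"BF16 weights: {bf16:.0f} GiB")  # ~438 GiB -> e.g. 8x 80 GB GPUs
print(f"FP8 weights:  {fp8:.0f} GiB")   # ~219 GiB -> roughly 4x 80 GB GPUs
```

Because all 235B parameters must be resident even though only 22B are activated per token, the weights are split across GPUs via tensor or pipeline parallelism, which is the "tensor"/"pipeline" field in the configurations below.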

Recommended server configurations for hosting Qwen3-235B-A22B-Thinking-2507

Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS |
|---|---|---|---|---|---|
| teslaa100-3.32.384.240 | 262,144 | pipeline | 3 | $7.36 | 2.108 |
| h100nvl-2.24.192.240 | 262,144 | tensor | 2 | $8.17 | 1.165 |
| rtx5090-6.44.256.240 | 262,144 | pipeline | 6 | $8.86 | 1.029 |
| teslaa100-4.16.256.240 | 262,144 | tensor | 4 | $9.14 | 3.587 |
| h200-2.24.256.240 | 262,144 | tensor | 2 | $9.41 | 2.965 |
| rtx5090-8.44.256.240 | 262,144 | tensor | 8 | $11.55 | 2.148 |
| h100-3.32.384.240 | 262,144 | pipeline | 3 | $11.73 | 2.108 |
| h100-4.16.256.240 | 262,144 | tensor | 4 | $14.96 | 3.587 |
Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS |
|---|---|---|---|---|---|
| teslaa100-6.44.512.320.nvlink | 262,144 | pipeline | 6 | $14.08 | 3.841 |
| h200-3.32.512.480 | 262,144 | pipeline | 3 | $14.36 | 2.910 |
| h100nvl-4.32.384.480 | 262,144 | tensor | 4 | $16.23 | 1.956 |
| teslaa100-8.44.512.320.nvlink | 262,144 | tensor | 8 | $18.33 | 6.799 |
| h200-4.32.768.480 | 262,144 | tensor | 4 | $19.23 | 5.556 |
Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS |
|---|---|---|---|---|---|
| teslaa100-8.44.704.960.nvlink | 262,144 | tensor | 8 | $18.78 | 1.833 |
| h200-6.52.896.640 | 262,144 | pipeline | 6 | $28.36 | 5.884 |
| h200-8.52.1024.640 | 262,144 | tensor | 8 | $37.34 | 11.178 |

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.