Qwen3-235B-A22B

reasoning

Qwen3-235B-A22B is the flagship model of the Qwen3 series and one of the largest open language models in the world. It has 235 billion total parameters, of which 22 billion are activated for each token. This efficiency comes from a Mixture of Experts (MoE) architecture with 128 experts, of which only 8 are engaged at each computational step. Improvements to the attention mechanism preserve accuracy over long contexts and support sequence lengths of up to 41K tokens.

One of the key features of Qwen3-235B-A22B is its support for two operational modes: thinking and no-thinking. In thinking mode, the model applies extended step-by-step reasoning and additional computation to analyze tasks in depth, producing more precise and thorough responses. No-thinking mode, in contrast, is optimized for fast execution of simple tasks such as text formatting, translation, or short-answer queries, without unnecessary use of compute. This gives users the flexibility to balance speed against output quality.
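The mode can also be toggled per turn with Qwen3's documented `/think` and `/no_think` soft switches in the prompt (the chat template additionally exposes an `enable_thinking` flag). A minimal sketch; the helper function name is ours:

```python
def with_mode(prompt: str, thinking: bool) -> str:
    """Append Qwen3's soft switch to select thinking or no-thinking mode
    for this turn. The /think and /no_think switches are part of the Qwen3
    chat format; this helper is just an illustrative wrapper."""
    return f"{prompt} {'/think' if thinking else '/no_think'}"

# A quick translation query where deep reasoning is unnecessary:
print(with_mode("Translate 'hello' into French.", thinking=False))
```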

Qwen3-235B-A22B can be applied in scientific research, software development, test automation, technical documentation processing, and AI agent creation. It is suitable for academic environments, as well as government and corporate projects where high accuracy, scalability, and flexible task customization are essential. Its support for 119 languages also makes it convenient for international use.


Announce Date: 29.04.2025
Parameters: 235B
Experts: 128
Activated at inference: 22B
Context: 41K
Layers: 94
Attention Type: Full or Sliding Window Attention
Developer: Qwen
Transformers Version: 4.51.0
vLLM Version: 0.8.5
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore the capabilities of Qwen3-235B-A22B. You can obtain an API access token on the token management page after registration and verification.
Model Name Context Type GPU Status Link
There are no public endpoints for this model yet.
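Once an endpoint becomes available, it can typically be called over an OpenAI-compatible chat completions API using the access token. A minimal sketch, assuming a hypothetical endpoint URL and placeholder token:

```python
import json
import urllib.request

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder, not a real endpoint
API_TOKEN = "YOUR_API_TOKEN"  # obtained on the token management page

def build_request(prompt: str) -> urllib.request.Request:
    """Assemble an OpenAI-style chat-completions request for Qwen3-235B-A22B."""
    payload = {
        "model": "Qwen3-235B-A22B",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )

req = build_request("Summarize the MoE architecture in two sentences.")
# Send with urllib.request.urlopen(req) once a live endpoint and token exist.
```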

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
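A private instance along these lines is typically served with vLLM (version 0.8.5 per the spec table). A minimal launch sketch; the tensor-parallel degree, port, and context length are assumptions to be matched to the chosen configuration:

```shell
# Serve Qwen3-235B-A22B behind an OpenAI-compatible API.
# --tensor-parallel-size must match the number of GPUs on the instance;
# --max-model-len 40960 enables the full advertised context.
vllm serve Qwen/Qwen3-235B-A22B \
  --tensor-parallel-size 8 \
  --max-model-len 40960 \
  --port 8000
```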

Recommended server configurations for hosting Qwen3-235B-A22B

Prices:

| Name | Max model length (tokens) | Parallelism | GPUs | Price, hour | TPS |
|---|---|---|---|---|---|
| teslaa2-1.16.32.240 | 40,960 | — | 1 | $0.39 | -15.338 |
| teslat4-1.16.64.160 | 40,960 | — | 1 | $0.42 | -15.338 |
| rtx2080ti-2.16.64.160 | 40,960 | tensor | 2 | $0.71 | -14.943 |
| rtx3080-2.16.64.160 | 40,960 | tensor | 2 | $1.03 | -15.188 |
| rtx4090-1.32.64.160 | 40,960 | — | 1 | $1.18 | -14.358 |
| rtxa5000-2.16.64.160.nvlink | 40,960 | tensor | 2 | $1.23 | -11.757 |
| rtx5090-1.32.64.160 | 40,960 | — | 1 | $1.69 | -13.377 |
| teslaa10-4.16.128.160 | 40,960 | tensor | 4 | $1.75 | -6.555 |
| teslaa100-1.16.64.160 | 40,960 | — | 1 | $2.37 | -7.495 |
| rtx3090-4.16.128.160 | 40,960 | tensor | 4 | $3.01 | -6.555 |
| h100-1.16.64.160 | 40,960 | — | 1 | $3.83 | -7.495 |
| h100nvl-1.16.96.160 | 40,960 | — | 1 | $4.11 | -5.779 |
| teslaa100-2.24.256.160.nvlink | 40,960 | tensor | 2 | $4.93 | 1.969 |
| h200-2.24.256.160.nvlink | 40,960 | tensor | 2 | $9.40 | 16.921 |
| h200-4.32.768.480 | 40,960 | tensor | 4 | $19.23 | 50.800 |
Prices:

| Name | Max model length (tokens) | Parallelism | GPUs | Price, hour | TPS |
|---|---|---|---|---|---|
| teslaa2-1.16.32.240 | 40,960 | — | 1 | $0.39 | -30.930 |
| teslat4-1.16.64.160 | 40,960 | — | 1 | $0.42 | -30.930 |
| rtx2080ti-2.16.64.160 | 40,960 | tensor | 2 | $0.71 | -30.535 |
| rtx3080-2.16.64.160 | 40,960 | tensor | 2 | $1.03 | -30.780 |
| rtx4090-1.32.64.160 | 40,960 | — | 1 | $1.18 | -29.949 |
| rtxa5000-2.16.64.160.nvlink | 40,960 | tensor | 2 | $1.23 | -27.348 |
| rtx5090-1.32.64.160 | 40,960 | — | 1 | $1.69 | -28.969 |
| teslaa10-4.16.128.160 | 40,960 | tensor | 4 | $1.75 | -22.147 |
| teslaa100-1.16.64.160 | 40,960 | — | 1 | $2.37 | -23.086 |
| rtx3090-4.16.128.160 | 40,960 | tensor | 4 | $3.01 | -22.147 |
| h100-1.16.64.160 | 40,960 | — | 1 | $3.83 | -23.086 |
| h100nvl-1.16.96.160 | 40,960 | — | 1 | $4.11 | -21.371 |
| teslaa100-2.24.256.160.nvlink | 40,960 | tensor | 2 | $4.93 | -13.622 |
| h200-2.24.256.160.nvlink | 40,960 | tensor | 2 | $9.40 | 1.329 |
| h200-4.32.768.480 | 40,960 | tensor | 4 | $19.23 | 35.208 |
Prices:

| Name | Max model length (tokens) | Parallelism | GPUs | Price, hour | TPS |
|---|---|---|---|---|---|
| teslaa2-1.16.32.240 | 40,960 | — | 1 | $0.39 | -62.359 |
| teslat4-1.16.64.160 | 40,960 | — | 1 | $0.42 | -62.359 |
| rtx2080ti-2.16.64.160 | 40,960 | tensor | 2 | $0.71 | -61.964 |
| rtx3080-2.16.64.160 | 40,960 | tensor | 2 | $1.03 | -62.209 |
| rtx4090-1.32.64.160 | 40,960 | — | 1 | $1.18 | -61.379 |
| rtxa5000-2.16.64.160.nvlink | 40,960 | tensor | 2 | $1.23 | -58.778 |
| rtx5090-1.32.64.160 | 40,960 | — | 1 | $1.69 | -60.398 |
| teslaa10-4.16.128.160 | 40,960 | tensor | 4 | $1.75 | -53.576 |
| teslaa100-1.16.64.160 | 40,960 | — | 1 | $2.37 | -54.516 |
| rtx3090-4.16.128.160 | 40,960 | tensor | 4 | $3.01 | -53.576 |
| h100-1.16.64.160 | 40,960 | — | 1 | $3.83 | -54.516 |
| h100nvl-1.16.96.160 | 40,960 | — | 1 | $4.11 | -52.800 |
| teslaa100-2.24.256.160.nvlink | 40,960 | tensor | 2 | $4.93 | -45.052 |
| h200-2.24.256.160.nvlink | 40,960 | tensor | 2 | $9.40 | -30.100 |
| h200-4.32.768.480 | 40,960 | tensor | 4 | $19.23 | 3.779 |

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.