Qwen3-Next-80B-A3B-Thinking

reasoning

Qwen3-Next-80B-A3B-Thinking is a representative of the new Qwen3-Next family, in which standard attention is replaced with a hybrid of Gated DeltaNet and Gated Attention for efficient modeling of ultra-long contexts. The model also implements an ultra-sparse Mixture-of-Experts (MoE) architecture, activating only 10 routed experts plus 1 shared expert per token out of 512 total. This yields a parameter activation ratio of just 3.7%, far lower than in traditional MoE models. Training-stability optimizations include zero-centered, weight-decayed layernorm, which curbs the abnormal weight growth seen in standard layer normalization. The model also employs Multi-Token Prediction (MTP) to accelerate inference and improve pretraining performance. The native context length is 262,144 tokens and can be extended to 1,010,000 tokens using the YaRN technique.
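The sparsity figures above can be verified with quick arithmetic. This is a back-of-envelope sketch, not an official calculation; the exact activated-parameter count also depends on dense components such as embeddings and attention layers.

```python
# Back-of-envelope check of the sparsity figures quoted above.
# Assumptions: ~3B activated parameters out of ~81.3B total, and
# 10 routed experts + 1 shared expert selected from 512 per token.

TOTAL_PARAMS_B = 81.3      # total parameters, billions
ACTIVE_PARAMS_B = 3.0      # activated per token, billions
EXPERTS_TOTAL = 512
EXPERTS_ACTIVE = 10 + 1    # 10 routed + 1 shared

param_ratio = ACTIVE_PARAMS_B / TOTAL_PARAMS_B   # ~0.037, i.e. the quoted 3.7%
expert_ratio = EXPERTS_ACTIVE / EXPERTS_TOTAL    # ~2% of experts fire per token

print(f"activated parameters: {param_ratio:.1%}")
print(f"active experts per token: {expert_ratio:.1%}")
```

The expert ratio (about 2%) is even lower than the parameter ratio because the shared expert and dense layers contribute activated parameters on every token.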

Qwen3-Next-80B-A3B-Thinking demonstrates outstanding results on key benchmarks, surpassing both Gemini-2.5-Flash-Thinking and previous-generation Qwen models. On the AIME25 mathematical benchmark, which evaluates complex mathematical problem-solving at the Olympiad level, the model achieves 87.8% versus 72.0% for Gemini. On HMMT25, which tests high-level mathematical reasoning, it scores 73.9% compared to 64.2%. On the LiveCodeBench v6 benchmark, assessing real-world programming performance, it reaches 68.7% versus 61.2% for its competitor. On the comprehensive Arena-Hard v2 benchmark, it achieves 62.3% versus 56.7%.

Its specialization in complex reasoning makes the Thinking version ideal for tasks requiring deep analysis. The model is an excellent choice for step-by-step reasoning, detailed logical inference, processing long documents, cross-referenced analytics, agent pipelines, and of course, mathematical problem solving. The developers recommend output lengths of up to 32,768 tokens for most queries and up to 81,920 tokens for particularly complex tasks.
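The recommended output budgets can be expressed as a small helper that builds transformers-style generation kwargs. This is an illustrative sketch; the function name and config shape are our own, not an official API.

```python
# Recommended output budgets from the model card, expressed as a
# hypothetical helper producing transformers-style generation kwargs.

STANDARD_OUTPUT_BUDGET = 32_768   # tokens, for most queries
COMPLEX_OUTPUT_BUDGET = 81_920    # tokens, for particularly complex tasks

def output_budget(complex_task: bool = False) -> dict:
    """Return generation kwargs with the recommended max output length."""
    budget = COMPLEX_OUTPUT_BUDGET if complex_task else STANDARD_OUTPUT_BUDGET
    return {"max_new_tokens": budget}

print(output_budget())                    # {'max_new_tokens': 32768}
print(output_budget(complex_task=True))   # {'max_new_tokens': 81920}
```

A dict like this can be passed as `**kwargs` to `model.generate(...)` in transformers or mapped onto the `max_tokens` field of an OpenAI-compatible API.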


Announce Date: 11.09.2025
Parameters: 81.3B
Experts: 512
Activated: 3B
Context: 262K
VRAM requirements: 37.9 GB with 4-bit quantization
Developer: Qwen
Transformers Version: 4.57.0.dev0
License: Apache 2.0
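The VRAM figure above can be sanity-checked with the usual rough rule of thumb: weight memory is approximately parameter count times bits per weight divided by 8. This is a naive estimate, not the provider's sizing formula; the quoted 37.9 GB is somewhat lower than the naive number, plausibly because not every tensor is quantized to 4 bits and quantization schemes differ in overhead (an assumption on our part).

```python
# Naive weight-memory estimate: params (billions) * bits per weight / 8.
# Ignores KV cache, activations, and per-layer quantization choices.

def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB (using 1 GB = 1e9 bytes)."""
    return params_billion * bits_per_weight / 8

print(f"{weight_memory_gb(81.3, 4):.2f} GB")   # ~40.65 GB naive 4-bit estimate
```

Actual deployment also needs headroom for the KV cache, which grows with context length, so the 262K-context configurations below provision well above the bare weight footprint.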

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3-Next-80B-A3B-Thinking capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name Context Type GPU TPS Status Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended configurations for hosting Qwen3-Next-80B-A3B-Thinking

Prices:
Name | Context | vCPU | RAM, MB | Disk, GB | GPUs | Price, hour
teslaa10-2.16.64.160 | 262,144 | 16 | 65536 | 160 | 2 | $0.93
teslat4-4.16.64.160 | 262,144 | 16 | 65536 | 160 | 4 | $0.96
rtx2080ti-4.16.64.160 | 262,144 | 16 | 65536 | 160 | 4 | $1.18
rtx3090-2.16.64.160 | 262,144 | 16 | 65536 | 160 | 2 | $1.67
rtx3080-4.16.64.160 | 262,144 | 16 | 65536 | 160 | 4 | $1.82
rtx4090-2.16.64.160 | 262,144 | 16 | 65536 | 160 | 2 | $2.19
teslav100-2.16.64.240 | 262,144 | 16 | 65536 | 240 | 2 | $2.22
teslaa100-1.16.64.160 | 262,144 | 16 | 65536 | 160 | 1 | $2.58
rtx5090-2.16.64.160 | 262,144 | 16 | 65536 | 160 | 2 | $2.93
teslah100-1.16.64.160 | 262,144 | 16 | 65536 | 160 | 1 | $5.11
Prices:
Name | Context | vCPU | RAM, MB | Disk, GB | GPUs | Price, hour
teslaa10-4.16.128.160 | 262,144 | 16 | 131072 | 160 | 4 | $1.75
teslaa100-1.16.128.160 | 262,144 | 16 | 131072 | 160 | 1 | $2.71
rtx3090-4.16.128.160 | 262,144 | 16 | 131072 | 160 | 4 | $3.23
rtx4090-4.16.128.160 | 262,144 | 16 | 131072 | 160 | 4 | $4.26
rtx5090-3.16.96.160 | 262,144 | 16 | 98304 | 160 | 3 | $4.34
teslah100-1.16.128.160 | 262,144 | 16 | 131072 | 160 | 1 | $5.23
Prices:
Name | Context | vCPU | RAM, MB | Disk, GB | GPUs | Price, hour
teslaa100-2.24.256.240 | 262,144 | 24 | 262144 | 240 | 2 | $5.36
teslah100-2.24.256.240 | 262,144 | 24 | 262144 | 240 | 2 | $10.41

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.