Qwen3-Coder-Next

Qwen3-Coder-Next is an open-weights language model designed for building autonomous coding agents and for efficient deployment. Its key distinguishing feature is a hybrid architecture that combines linear and full attention with Mixture-of-Experts (MoE) sparsity.

Qwen3-Coder-Next moves away from the classic Transformer by alternating layers of linear and full attention. Its base block is repeated 12 times and follows the structure 3 layers of Gated DeltaNet → 1 layer of Gated Attention, with each of these layers accompanied by an MoE block.

  • Gated DeltaNet (linear attention) is responsible for efficiency and for processing large contexts (the model's context length is 262,144 tokens). It is an evolution of linear attention that surpasses Mamba2 on in-context learning tasks, and it is configured for speed: 32 heads for Value (V), 16 heads for Query/Key (QK), and a head dimension of 128.
  • Gated Attention (full attention) is the classic, "heavy" attention needed for precise reasoning and complex dependencies where no detail can be missed. Its configuration differs: 16 Query (Q) heads and only 2 Key/Value (KV) heads (grouped-query attention for memory efficiency), but with an increased head dimension of 256.
  • Mixture-of-Experts (MoE) works on top of the attention layers. The model contains 512 experts in total, of which only 10 are activated per token (plus 1 shared expert that is always active). This is precisely what allows the model to keep only 3 billion parameters active out of a total of 80 billion.
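
The layer pattern and the sparsity arithmetic are easy to check. Below is a minimal Python sketch based only on the numbers above; all identifiers are illustrative assumptions, not the reference implementation:

# Layer pattern: 12 repetitions of (3x Gated DeltaNet -> 1x Gated Attention),
# each layer paired with an MoE block. Names are illustrative, not official.
NUM_BLOCKS = 12
PATTERN = ["gated_deltanet"] * 3 + ["gated_attention"]

layers = [kind for _ in range(NUM_BLOCKS) for kind in PATTERN]
assert len(layers) == 48                       # 48 layers total
assert layers.count("gated_attention") == 12   # 12 full-attention layers

# MoE sparsity: 10 routed experts + 1 shared expert active out of 512.
TOTAL_EXPERTS, ACTIVE_ROUTED, SHARED = 512, 10, 1
active_fraction = (ACTIVE_ROUTED + SHARED) / TOTAL_EXPERTS
print(f"{active_fraction:.1%} of expert capacity active per token")  # ~2.1%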

The model was trained using a multi-stage approach: pre-training on a large volume of natural and synthetic data, mid-training to specialize in code and agentic tasks, and post-training with Supervised Fine-Tuning (SFT), Reinforcement Learning (RL), and expert distillation. A distinctive feature of the methodology is agentic training: the model was trained on tasks with executable environments, allowing it to learn directly from execution feedback. This significantly improves its multi-step reasoning, tool usage, and error correction in realistic development conditions.
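
To give a feel for what "learning from execution feedback" means, here is a toy sketch of a single agentic episode; propose_patch and execute_tests are hypothetical placeholders standing in for a model call and a sandboxed test run:

import random

def propose_patch(task, history):
    # Hypothetical stand-in for querying the model with the task plus
    # the execution feedback accumulated so far.
    return f"patch-attempt-{len(history) + 1} for {task!r}"

def execute_tests(patch):
    # Hypothetical sandbox: pretend the test suite passes 30% of the time;
    # a real environment would actually run the project's tests.
    passed = random.random() < 0.3
    return passed, f"{patch}: {'PASS' if passed else 'FAIL'}"

def agentic_episode(task, max_steps=5):
    history = []
    for _ in range(max_steps):
        patch = propose_patch(task, history)
        passed, log = execute_tests(patch)
        history.append(log)
        if passed:
            return True, history   # successful trajectory -> positive reward
    return False, history          # budget exhausted -> negative/zero reward

print(agentic_episode("fix the failing unit test"))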

On key benchmarks for coding agents, the model posts competitive results. On SWE-Bench Verified, Qwen3-Coder-Next scores 70.6%, competing with models that have an order of magnitude more active parameters. On the more challenging SWE-Bench Pro it reaches 44.3%, demonstrating its ability to solve complex tasks that require step-by-step planning. In tests of adherence to tool-calling formats across different IDE/CLI environments, the model averages 92.7%, surpassing many top-tier open and proprietary models and confirming its readiness for diverse development workflows.

The model's use cases are primarily programming-oriented. Its low inference requirements make it ideal for deployment on developers' local or cloud machines as an intelligent assistant integrated into an IDE or CLI. The model can analyze the context of an entire repository, localize issues, and propose correct patches, which is valuable for automating code reviews and project maintenance.


Announce Date: 30.01.2026
Parameters: 79.67B (~80B total)
Experts: 512
Activated at inference: 3B
Context: 262,144 tokens
Layers: 48 (12 full attention)
Attention Type: Hybrid (Gated DeltaNet linear attention + gated full attention)
Developer: Qwen
Transformers Version: 4.57.0.dev0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3-Coder-Next capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name                           Context  Type    GPU        Status     Link
bullpoint/Qwen3-Coder-Next-AWQ-4bit  262,144  Public  3×RTX4090  AVAILABLE  chat

API access to Qwen3-Coder-Next endpoints

cURL:

curl https://chat.immers.cloud/v1/endpoints/qwen3-coder-test/generate/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer USER_API_KEY" \
  -d '{
        "model": "Qwen3-Coder-Next",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "Say this is a test"}
        ],
        "temperature": 0,
        "max_tokens": 150
      }'

PowerShell:

$response = Invoke-WebRequest https://chat.immers.cloud/v1/endpoints/qwen3-coder-test/generate/chat/completions `
  -Method POST `
  -Headers @{
    "Authorization" = "Bearer USER_API_KEY"
    "Content-Type"  = "application/json"
  } `
  -Body (@{
    model    = "Qwen3-Coder-Next"
    messages = @(
      @{ role = "system"; content = "You are a helpful assistant." },
      @{ role = "user"; content = "Say this is a test" }
    )
  } | ConvertTo-Json -Depth 4)  # -Depth ensures the nested messages array is fully serialized
($response.Content | ConvertFrom-Json).choices[0].message.content

Python:

# pip install openai --upgrade

from openai import OpenAI

client = OpenAI(
    api_key="USER_API_KEY",
    base_url="https://chat.immers.cloud/v1/endpoints/qwen3-coder-test/generate/",
)

chat_response = client.chat.completions.create(
    model="Qwen3-Coder-Next",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say this is a test"},
    ],
)
print(chat_response.choices[0].message.content)
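
Since the endpoint is OpenAI-compatible, streaming should also work through the standard stream=True flag; this is a sketch assuming the server has streaming enabled:

from openai import OpenAI

client = OpenAI(
    api_key="USER_API_KEY",
    base_url="https://chat.immers.cloud/v1/endpoints/qwen3-coder-test/generate/",
)

# Stream tokens as they are generated instead of waiting for the full reply.
stream = client.chat.completions.create(
    model="Qwen3-Coder-Next",
    messages=[{"role": "user", "content": "Write a function that reverses a string."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()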

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances when you need to:

  • maximize endpoint performance,
  • run the full 262,144-token context for long sequences,
  • ensure top-tier security by processing data in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended configurations for hosting Qwen3-Coder-Next

All configurations below serve the full 262,144-token context; multi-GPU instances use the indicated parallelism mode (pipeline or tensor).

Prices:
Name                          Parallelism  vCPU  RAM, MB  Disk, GB  GPUs  Price, hour
teslaa10-3.16.96.160          pipeline     16    98304    160       3     $1.34
teslaa10-4.16.64.160          tensor       16    65536    160       4     $1.62
teslaa2-6.32.128.160          pipeline     32    131072   160       6     $1.65
teslav100-2.16.64.240         tensor       16    65536    240       2     $2.22
rtx3090-3.16.96.160           pipeline     16    98304    160       3     $2.29
rtxa5000-4.16.128.160.nvlink  tensor       16    131072   160       4     $2.34
teslaa100-1.16.64.160         -            16    65536    160       1     $2.37
rtx4090-3.16.96.160           pipeline     16    98304    160       3     $2.83
rtx3090-4.16.64.160           tensor       16    65536    160       4     $2.89
rtx5090-2.16.64.160           tensor       16    65536    160       2     $2.93
rtx4090-4.16.64.160           tensor       16    65536    160       4     $3.60
h100-1.16.64.160              -            16    65536    160       1     $3.83
h100nvl-1.16.96.160           -            16    98304    160       1     $4.11
h200-1.16.128.160             -            16    131072   160       1     $4.74
Prices:
Name                          Parallelism  vCPU  RAM, MB  Disk, GB  GPUs  Price, hour
rtxa5000-6.24.192.160.nvlink  pipeline     24    196608   160       6     $3.50
h100nvl-1.16.96.160           -            16    98304    160       1     $4.11
teslav100-4.32.96.160         tensor       32    98304    160       4     $4.35
teslaa100-2.24.96.160.nvlink  tensor       24    98304    160       2     $4.61
rtxa5000-8.24.256.160.nvlink  tensor       24    262144   160       8     $4.61
h200-1.16.128.160             -            16    131072   160       1     $4.74
rtx5090-4.16.128.160          tensor       16    131072   160       4     $5.74
rtx4090-6.44.256.160          pipeline     44    262144   160       6     $5.83
rtx4090-8.44.256.160          tensor       44    262144   160       8     $7.51
h100-2.24.256.160             tensor       24    262144   160       2     $7.84
Prices:
Name                          Parallelism  vCPU  RAM, MB  Disk, GB  GPUs  Price, hour
teslaa100-3.32.384.240        pipeline     32    393216   240       3     $7.36
h100nvl-2.24.192.240          tensor       24    196608   240       2     $8.17
rtx5090-6.44.256.240          pipeline     44    262144   240       6     $8.86
teslaa100-4.16.256.240        tensor       16    262144   240       4     $9.14
h200-2.24.256.240             tensor       24    262144   240       2     $9.41
rtx5090-8.44.256.240          tensor       44    262144   240       8     $11.55
h100-3.32.384.240             pipeline     32    393216   240       3     $11.73
h100-4.16.256.240             tensor       16    262144   240       4     $14.96
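
The "pipeline" and "tensor" labels in the tables above are the two standard ways of splitting a model across GPUs. As a rough sketch, here is how they would map to vLLM settings when self-hosting the AWQ build listed earlier; this assumes your vLLM version supports this architecture and quantization, and the sizes must match your GPU count:

from vllm import LLM, SamplingParams

llm = LLM(
    model="bullpoint/Qwen3-Coder-Next-AWQ-4bit",
    tensor_parallel_size=4,      # "tensor" rows: split each layer across GPUs
    # pipeline_parallel_size=3,  # "pipeline" rows: split layers across GPUs
    max_model_len=262144,        # full context, as in the tables above
)
out = llm.generate(["Say this is a test"], SamplingParams(max_tokens=50))
print(out[0].outputs[0].text)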

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.