Qwen3-Coder-Next is an open-weights language model designed for building autonomous coding agents and for efficient deployment. Its distinguishing feature is an architecture that combines hybrid attention with Mixture-of-Experts (MoE).
Qwen3-Coder-Next moves away from the classic Transformer by alternating linear- and full-attention layers. Its base block is repeated 12 times and follows the structure 3 × Gated DeltaNet → 1 × Gated Attention, with each of these layers accompanied by an MoE block.

- **Gated DeltaNet (linear attention):** responsible for efficiency and for processing large contexts (the model's context length is 262,144 tokens). This technology is an evolution of linear attention that surpasses Mamba2 on in-context learning tasks. It is configured for speed: 32 value (V) heads and 16 query/key (QK) heads, with a head dimension of 128.
- **Gated Attention (full attention):** the classic, "heavy" attention needed for precise reasoning and for complex dependencies where no detail can be missed. Its configuration differs: 16 query (Q) heads and only 2 key/value (KV) heads (grouped-query attention for memory efficiency), but with an increased head dimension of 256.
- **Mixture-of-Experts (MoE):** working on top of the attention layers, the model contains 512 experts in total, of which only 10 are activated per token (plus 1 shared expert that is always active). This is precisely what keeps only 3 billion parameters active out of 80 billion.
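The layer schedule and per-token expert routing described above can be sketched in a few lines. This is an illustrative sketch only, not the actual implementation: the function names are hypothetical, and the gating here is a plain greedy top-k over random scores.

```python
# Illustrative sketch of the architecture described above (all names hypothetical):
# 12 repeats of [3 x Gated DeltaNet -> 1 x Gated Attention] = 48 layers,
# with MoE routing that activates 10 of 512 experts per token plus 1 shared expert.
import numpy as np

def layer_schedule(num_blocks: int = 12) -> list[str]:
    """Return the attention type of each layer in order."""
    return (["gated_deltanet"] * 3 + ["gated_attention"]) * num_blocks

def route_topk(gate_logits: np.ndarray, k: int = 10) -> np.ndarray:
    """Greedy top-k routing: pick the k highest-scoring experts for one token."""
    return np.argsort(gate_logits)[-k:]

schedule = layer_schedule()
rng = np.random.default_rng(0)
gate_logits = rng.normal(size=512)      # gating scores over the 512 routed experts
active = route_topk(gate_logits)        # 10 routed experts for this token

print(len(schedule))                    # 48 layers total
print(schedule.count("gated_attention"))  # 12 full-attention layers
print(len(active) + 1)                  # 11 experts active per token (incl. shared)
```

The 3:1 ratio means three quarters of the layers use the cheap linear attention, which is what makes the 262,144-token context practical.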
The model was trained using a multi-stage approach: pre-training on a large volume of natural and synthetic data, mid-training for specialization in code and agentic tasks, and post-training with Supervised Fine-Tuning (SFT), Reinforcement Learning (RL), and expert distillation. A distinctive feature of the methodology is agentic training, in which the model was trained on tasks inside executable environments, allowing it to learn directly from execution feedback. This significantly improves the model's multi-step reasoning, tool usage, and error correction in realistic development conditions.
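The execution-feedback loop behind agentic training can be sketched as "generate a candidate, run it, feed the error back, retry". Below is a minimal sketch under stated assumptions: the `stub_model` policy is a hypothetical stand-in for the LLM, and the "environment" is just a Python subprocess.

```python
# Minimal sketch of an execution-feedback loop (hypothetical names throughout):
# a candidate program is run in an executable environment, and the resulting
# error output conditions the next attempt -- the idea behind agentic training.
import os
import subprocess
import sys
import tempfile

def run_candidate(code: str) -> tuple[bool, str]:
    """Execute a candidate in a subprocess; return (success, stderr feedback)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run([sys.executable, path], capture_output=True, text=True)
    os.unlink(path)
    return proc.returncode == 0, proc.stderr

def stub_model(feedback: str) -> str:
    """Stand-in policy: emits a failing candidate first, a fixed one after feedback."""
    if "ZeroDivisionError" in feedback:
        return "print(10 // 2)"   # corrected candidate after seeing the traceback
    return "print(10 // 0)"       # initial, failing candidate

feedback = ""
for attempt in range(3):
    ok, feedback = run_candidate(stub_model(feedback))
    if ok:
        break
print(attempt, ok)  # succeeds on the second attempt: attempt == 1, ok == True
```

In real agentic training the environment is a full repository with a test suite, and the traceback conditions the model's next patch rather than a hard-coded stub.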
On key benchmarks for coding agents, the model demonstrates competitive results. On SWE-Bench Verified, Qwen3-Coder-Next scores around 70.6%, competing with models that have an order of magnitude more active parameters. On the more challenging SWE-Bench Pro, it achieves 44.3%, demonstrating its ability to solve complex tasks that require step-by-step planning. In tests of adherence to tool-calling formats across different IDE/CLI environments, the model shows excellent quality (92.7% on average), surpassing many top-tier open and proprietary models and confirming its readiness for diverse development environments.
The model's use cases are primarily programming-oriented. Its low inference requirements make it ideal for deployment on developers' local or cloud machines as an intelligent assistant integrated into an IDE or CLI. The model can analyze the context of an entire repository, localize issues, and propose correct patches, which is valuable for automating code reviews and project maintenance.
| Model Name | Context | Type | GPU | TPS | Status | Link |
|---|---|---|---|---|---|---|
| bullpoint/Qwen3-Coder-Next-AWQ-4bit | 262,144 | Public | 3×RTX4090 | | AVAILABLE | chat |
```shell
curl https://chat.immers.cloud/v1/endpoints/qwen3-coder-test/generate/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer USER_API_KEY" \
  -d '{"model": "Qwen3-Coder-Next", "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Say this is a test"}
  ], "temperature": 0, "max_tokens": 150}'
```
```powershell
$response = Invoke-WebRequest https://chat.immers.cloud/v1/endpoints/qwen3-coder-test/generate/chat/completions `
  -Method POST `
  -Headers @{
    "Authorization" = "Bearer USER_API_KEY"
    "Content-Type"  = "application/json"
  } `
  -Body (@{
    model    = "Qwen3-Coder-Next"
    messages = @(
      @{ role = "system"; content = "You are a helpful assistant." },
      @{ role = "user"; content = "Say this is a test" }
    )
  } | ConvertTo-Json)
($response.Content | ConvertFrom-Json).choices[0].message.content
```
```python
# pip install openai --upgrade
from openai import OpenAI

client = OpenAI(
    api_key="USER_API_KEY",
    base_url="https://chat.immers.cloud/v1/endpoints/qwen3-coder-test/generate/",
)
chat_response = client.chat.completions.create(
    model="Qwen3-Coder-Next",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say this is a test"},
    ],
)
print(chat_response.choices[0].message.content)
```
Rent your own physically dedicated instance with hourly or long-term monthly billing.
Available private-instance configurations are listed below:
| Context | Parallelism | vCPU | RAM, MB | Disk, GB | GPU | Price/hr | |
|---|---|---|---|---|---|---|---|
| 262,144 | pipeline | 16 | 98304 | 160 | 3 | $1.34 | Launch |
| 262,144 | tensor | 16 | 65536 | 160 | 4 | $1.62 | Launch |
| 262,144 | pipeline | 32 | 131072 | 160 | 6 | $1.65 | Launch |
| 262,144 | tensor | 16 | 65535 | 240 | 2 | $2.22 | Launch |
| 262,144 | pipeline | 16 | 98304 | 160 | 3 | $2.29 | Launch |
| 262,144 | tensor | 16 | 131072 | 160 | 4 | $2.34 | Launch |
| 262,144 | | 16 | 65536 | 160 | 1 | $2.37 | Launch |
| 262,144 | pipeline | 16 | 98304 | 160 | 3 | $2.83 | Launch |
| 262,144 | tensor | 16 | 65536 | 160 | 4 | $2.89 | Launch |
| 262,144 | tensor | 16 | 65536 | 160 | 2 | $2.93 | Launch |
| 262,144 | tensor | 16 | 65536 | 160 | 4 | $3.60 | Launch |
| 262,144 | | 16 | 65536 | 160 | 1 | $3.83 | Launch |
| 262,144 | | 16 | 98304 | 160 | 1 | $4.11 | Launch |
| 262,144 | | 16 | 131072 | 160 | 1 | $4.74 | Launch |
| Context | Parallelism | vCPU | RAM, MB | Disk, GB | GPU | Price/hr | |
|---|---|---|---|---|---|---|---|
| 262,144 | pipeline | 24 | 196608 | 160 | 6 | $3.50 | Launch |
| 262,144 | | 16 | 98304 | 160 | 1 | $4.11 | Launch |
| 262,144 | tensor | 32 | 98304 | 160 | 4 | $4.35 | Launch |
| 262,144 | tensor | 24 | 98304 | 160 | 2 | $4.61 | Launch |
| 262,144 | tensor | 24 | 262144 | 160 | 8 | $4.61 | Launch |
| 262,144 | | 16 | 131072 | 160 | 1 | $4.74 | Launch |
| 262,144 | tensor | 16 | 131072 | 160 | 4 | $5.74 | Launch |
| 262,144 | pipeline | 44 | 262144 | 160 | 6 | $5.83 | Launch |
| 262,144 | tensor | 44 | 262144 | 160 | 8 | $7.51 | Launch |
| 262,144 | tensor | 24 | 262144 | 160 | 2 | $7.84 | Launch |
| Context | Parallelism | vCPU | RAM, MB | Disk, GB | GPU | Price/hr | |
|---|---|---|---|---|---|---|---|
| 262,144 | pipeline | 32 | 393216 | 240 | 3 | $7.36 | Launch |
| 262,144 | tensor | 24 | 196608 | 240 | 2 | $8.17 | Launch |
| 262,144 | pipeline | 44 | 262144 | 240 | 6 | $8.86 | Launch |
| 262,144 | tensor | 16 | 262144 | 240 | 4 | $9.14 | Launch |
| 262,144 | tensor | 24 | 262144 | 240 | 2 | $9.41 | Launch |
| 262,144 | tensor | 44 | 262144 | 240 | 8 | $11.55 | Launch |
| 262,144 | pipeline | 32 | 393216 | 240 | 3 | $11.73 | Launch |
| 262,144 | tensor | 16 | 262144 | 240 | 4 | $14.96 | Launch |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.