Kimi K2.6, the flagship model from Moonshot AI, is built on a sparse Mixture‑of‑Experts (MoE) architecture with 1 trillion total parameters, of which only 32 billion are activated per token. The model comprises 61 layers (including one dense layer), 384 experts (8 routed and 1 shared per token), and 64 attention heads. As in previous versions, it employs Multi‑head Latent Attention (MLA), which compresses the key‑value cache into a low‑rank latent space, dramatically reducing memory usage on long contexts of up to 262,144 tokens. To stabilize training at the trillion‑parameter scale, the MuonClip optimizer is used, and the built‑in 400M‑parameter visual encoder MoonViT provides native image and video understanding without external modules.
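The memory saving from MLA can be illustrated with a toy sketch: instead of caching full per-head keys and values for every token, only a low-rank latent vector is stored and projected back up at attention time. The dimensions below are illustrative assumptions, not Kimi K2.6's actual projection sizes:

```python
import numpy as np

# Toy Multi-head Latent Attention (MLA) cache sketch.
# d_latent is an assumed illustrative value, not the model's real latent size.
d_model = 4096                 # hidden size (illustrative)
n_heads = 64                   # attention heads, as stated for K2.6
d_head = d_model // n_heads
d_latent = 512                 # low-rank latent dimension of the KV cache
seq_len = 262_144              # maximum context length

rng = np.random.default_rng(0)
W_down = rng.standard_normal((d_model, d_latent)) * 0.02   # compress hidden state
W_up_k = rng.standard_normal((d_latent, d_model)) * 0.02   # expand latent to keys
W_up_v = rng.standard_normal((d_latent, d_model)) * 0.02   # expand latent to values

# Per token, only the latent vector is cached instead of full K and V.
h = rng.standard_normal((1, d_model))    # hidden state of one token
c = h @ W_down                           # (1, d_latent) -> this goes in the cache
k, v = c @ W_up_k, c @ W_up_v            # reconstructed at attention time

full_cache_floats = seq_len * 2 * d_model   # classic cache: K and V per token
mla_cache_floats = seq_len * d_latent       # MLA cache: one latent per token
print(f"compression: {full_cache_floats / mla_cache_floats:.0f}x")  # -> 16x
```

With these toy sizes the cache shrinks 16x; the real ratio depends on the model's actual latent dimension.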
K2.6 undergoes Quantization‑Aware Training (QAT) directly during post‑training, so the model is optimized for 4‑bit weights and is released by the developers in exactly this format. The officially supported inference frameworks are vLLM, SGLang, and KTransformers. The model offers two operating modes. Thinking Mode runs a full chain‑of‑thought reasoning cycle (recommended temperature 1.0) and is intended for complex multi‑step tasks. Instant Mode provides fast, low‑variance responses (temperature 0.6, top‑p 0.95) for interactive scenarios. The ability to perform “interleaved thinking” (reasoning between tool calls rather than forming a single monolithic plan at the start) is what makes the model efficient on workflows spanning thousands of steps. The preserve_thinking option keeps full reasoning traces across consecutive tool invocations, which is critical for long programming and agentic sessions.
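The two modes map onto ordinary sampling parameters in an OpenAI-compatible chat request, as served by vLLM or SGLang. A minimal sketch: the model identifier is a placeholder, and passing `preserve_thinking` as a top-level body field is an assumption; check your serving stack's documentation for the exact wire format:

```python
# Sketch of the sampling settings for the two operating modes described above.
# "kimi-k2.6" and the `preserve_thinking` field name are assumptions.

def build_request(prompt: str, mode: str = "thinking") -> dict:
    """Return an OpenAI-compatible chat payload for the chosen mode."""
    if mode == "thinking":
        sampling = {"temperature": 1.0}                  # full chain-of-thought
    elif mode == "instant":
        sampling = {"temperature": 0.6, "top_p": 0.95}   # fast interactive replies
    else:
        raise ValueError(f"unknown mode: {mode}")
    return {
        "model": "kimi-k2.6",                            # placeholder identifier
        "messages": [{"role": "user", "content": prompt}],
        **sampling,
        # Keep reasoning traces across consecutive tool calls (assumed field).
        "preserve_thinking": mode == "thinking",
    }

payload = build_request("Refactor this module.", mode="instant")
```

The same payload can then be posted to the server's `/v1/chat/completions` route with any HTTP client.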
K2.6 extends the same architectural principles as K2.5 but brings major upgrades: a qualitative leap in long‑horizon coding, scaling the Agent Swarm from 100 to 300 sub‑agents, and increasing the number of coordinated steps from 1,500 to 4,000. These improvements place K2.6 among the leading open models and allow it to compete on equal terms with top closed systems. On Humanity’s Last Exam (HLE‑Full) with tools, the model scores 54.0, ahead of GPT‑5.4 (52.1), Claude Opus 4.6 (53.0), and Gemini 3.1 Pro (51.4). On SWE‑Bench Pro, which measures software engineering capabilities, K2.6 reaches 58.6, outperforming GPT‑5.4 (57.7), Claude Opus 4.6 (53.4), and Gemini 3.1 Pro (54.2). On DeepSearchQA, a deep agentic search benchmark, the model achieves 92.5 (F1), far surpassing GPT‑5.4 (78.6) and Gemini 3.1 Pro (81.9). In the BrowseComp test with an agent swarm, the result is 86.3 versus K2.5’s 78.4. On LiveCodeBench v6 it scores 89.6, on par with the best closed alternatives.
K2.6 is purpose‑built for professional software engineering and autonomous agent systems:

- it can continuously solve complex tasks for over 12 hours, issuing thousands of tool calls (optimization, DevOps, refactoring, cross‑platform development);
- from text descriptions and mock‑ups it generates deployment‑ready web interfaces with full backend logic and authentication;
- it orchestrates swarms of sub‑agents for parallel information gathering, research, and analysis, assembling the final results in the required format;
- and much more.
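The long-horizon tool-use pattern described above boils down to a loop that feeds each tool result back into the conversation, letting the model reason again before the next step instead of committing to one upfront plan. A schematic sketch, where `call_model` and `run_tool` are hypothetical stand-ins for your serving API and tool executor:

```python
# Schematic interleaved-thinking agent loop. `call_model` and `run_tool` are
# hypothetical placeholders, not part of any specific SDK.

def run_agent(task, call_model, run_tool, max_steps=4000):
    """Loop until the model stops requesting tools or the step budget runs out."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):              # K2.6 coordinates up to 4,000 steps
        reply = call_model(messages)        # may contain reasoning + a tool call
        messages.append(reply)
        tool_call = reply.get("tool_call")
        if tool_call is None:               # no tool requested: the task is done
            return reply["content"]
        result = run_tool(tool_call)        # execute, then feed the result back
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("step budget exhausted")
```

Because the full message history (including reasoning, when preserved) is resent on every iteration, the model can revise its plan after each tool result.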
| Model Name | Context | Type | GPU | Status | Link |
|---|---|---|---|---|---|

There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following scenarios:
| Context | Type | GPUs | Price/hr | TPS |
|---|---|---|---|---|
| 262,144 |  | 1 | $1.18 | -30.981 |
| 262,144 | tensor | 2 | $1.23 | -14.938 |
| 262,144 |  | 1 | $1.69 | -30.564 |
| 262,144 | tensor | 4 | $1.75 | -6.916 |
| 262,144 |  | 1 | $2.37 | -28.063 |
| 262,144 | tensor | 4 | $3.01 | -6.916 |
| 262,144 |  | 1 | $3.83 | -28.063 |
| 262,144 |  | 1 | $4.11 | -27.334 |
| 262,144 | tensor | 2 | $4.93 | -12.020 |
| 262,144 | tensor | 2 | $9.40 | -8.842 |
| 262,144 | tensor | 4 | $19.23 | -0.821 |
| Context | Type | GPUs | Price/hr | TPS |
|---|---|---|---|---|
| 262,144 |  | 1 | $1.18 | -54.472 |
| 262,144 | tensor | 2 | $1.23 | -26.683 |
| 262,144 |  | 1 | $1.69 | -54.056 |
| 262,144 | tensor | 4 | $1.75 | -12.789 |
| 262,144 |  | 1 | $2.37 | -51.555 |
| 262,144 | tensor | 4 | $3.01 | -12.789 |
| 262,144 |  | 1 | $3.83 | -51.555 |
| 262,144 |  | 1 | $4.11 | -50.826 |
| 262,144 | tensor | 2 | $4.93 | -23.766 |
| 262,144 | tensor | 2 | $9.40 | -20.588 |
| 262,144 | tensor | 4 | $19.23 | -6.694 |
| Context | Type | GPUs | Price/hr | TPS |
|---|---|---|---|---|
| 262,144 |  | 1 | $1.18 | -109.614 |
| 262,144 | tensor | 2 | $1.23 | -54.254 |
| 262,144 |  | 1 | $1.69 | -109.197 |
| 262,144 | tensor | 4 | $1.75 | -26.574 |
| 262,144 |  | 1 | $2.37 | -106.696 |
| 262,144 | tensor | 4 | $3.01 | -26.574 |
| 262,144 |  | 1 | $3.83 | -106.696 |
| 262,144 |  | 1 | $4.11 | -105.967 |
| 262,144 | tensor | 2 | $4.93 | -51.337 |
| 262,144 | tensor | 2 | $9.40 | -48.159 |
| 262,144 | tensor | 4 | $19.23 | -20.479 |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.