Kimi-K2.6

reasoning
multimodal
coding

Kimi K2.6, the flagship model from Moonshot AI, is built on a sparse Mixture‑of‑Experts (MoE) architecture with 1 trillion total parameters, of which only 32 billion are activated per token. The model comprises 61 layers (including one dense layer), 384 experts (8 routed and 1 shared per token), and 64 attention heads. As in previous versions, it employs Multi‑head Latent Attention (MLA), which compresses the key‑value cache into a low‑rank latent space, dramatically reducing memory usage on long contexts of up to 262,144 tokens. To stabilize training at the trillion‑parameter scale, the MuonClip optimizer is used, and the built‑in 400M‑parameter visual encoder MoonViT provides native image and video understanding without external modules.
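The memory win from MLA's low-rank KV compression is easy to see with back-of-the-envelope arithmetic. The sketch below compares a conventional full multi-head KV cache against a compressed latent cache at the model's 262,144-token context; the head dimension (128) and latent width (512) are illustrative assumptions, not published K2.6 figures.

```python
# Rough KV-cache size comparison: full multi-head attention vs. a low-rank
# latent cache in the style of MLA. HEAD_DIM and LATENT_DIM are assumed
# values for illustration, not official K2.6 specifications.

def kv_cache_bytes(num_layers, seq_len, per_token_dim, dtype_bytes=2):
    """Total KV-cache size for one sequence, in bytes (fp16/bf16 by default)."""
    return num_layers * seq_len * per_token_dim * dtype_bytes

LAYERS = 61            # from the model card
HEADS = 64             # from the model card
HEAD_DIM = 128         # assumption
CONTEXT = 262_144      # max context length
LATENT_DIM = 512       # assumed MLA latent width

# Standard MHA stores full keys AND values for every head at every layer.
mha = kv_cache_bytes(LAYERS, CONTEXT, 2 * HEADS * HEAD_DIM)
# MLA stores one compressed latent vector per token instead.
mla = kv_cache_bytes(LAYERS, CONTEXT, LATENT_DIM)

print(f"Full MHA cache: {mha / 2**30:.1f} GiB")
print(f"MLA latent cache: {mla / 2**30:.1f} GiB ({mha / mla:.0f}x smaller)")
```

Under these assumed dimensions the full cache would run to hundreds of GiB for a single max-length sequence, while the latent cache stays in the tens, which is what makes 262K-token contexts practical at all.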

K2.6 undergoes Quantization‑Aware Training (QAT) directly during post‑training, so it is natively optimized for 4‑bit weights, and the developers release it in exactly this format. The officially supported inference frameworks are vLLM, SGLang, and KTransformers. The model offers two operating modes. Thinking Mode delivers a full chain‑of‑thought reasoning cycle (recommended temperature 1.0) and is intended for complex multi‑step tasks. Instant Mode provides fast, deterministic responses (temperature 0.6, top‑p 0.95) for interactive scenarios. The ability to perform "interleaved thinking" (reasoning between tool calls rather than forming a single monolithic plan at the start) is what makes it efficient on workflows spanning thousands of steps. The preserve_thinking option keeps full reasoning traces across consecutive tool invocations, which is critical for long programming and agentic sessions.
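The two operating modes map onto different sampling settings. A minimal sketch of building the corresponding request bodies for an OpenAI-compatible endpoint (which both vLLM and SGLang expose) is shown below; the exact wire name of the preserve_thinking flag is an assumption here, so check your serving framework's documentation.

```python
# Sketch of per-mode sampling settings for an OpenAI-compatible chat endpoint.
# The "preserve_thinking" key is an assumed extension field, not a guaranteed
# part of the API surface.

SAMPLING = {
    "thinking": {"temperature": 1.0},                # full chain-of-thought
    "instant":  {"temperature": 0.6, "top_p": 0.95}, # fast interactive replies
}

def build_request(prompt, mode="thinking", preserve_thinking=False):
    """Assemble a chat-completions request body for the chosen mode."""
    body = {
        "model": "Kimi-K2.6",
        "messages": [{"role": "user", "content": prompt}],
        **SAMPLING[mode],
    }
    if preserve_thinking:
        # Keep reasoning traces across consecutive tool calls (assumed flag).
        body["preserve_thinking"] = True
    return body

req = build_request("Refactor this module", mode="instant")
```

The resulting dict can be POSTed to the server's `/v1/chat/completions` route with any HTTP client.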

K2.6 extends the same architectural principles as K2.5 but brings major upgrades: a qualitative leap in long‑horizon coding, scaling the Agent Swarm from 100 to 300 sub‑agents, and increasing the number of coordinated steps from 1,500 to 4,000. These improvements place K2.6 among the leading open models and allow it to compete on equal terms with top closed systems. On Humanity’s Last Exam (HLE‑Full) with tools, the model scores 54.0, ahead of GPT‑5.4 (52.1), Claude Opus 4.6 (53.0), and Gemini 3.1 Pro (51.4). On SWE‑Bench Pro, which measures software engineering capabilities, K2.6 reaches 58.6, outperforming GPT‑5.4 (57.7), Claude Opus 4.6 (53.4), and Gemini 3.1 Pro (54.2). On DeepSearchQA, a deep agentic search benchmark, the model achieves 92.5 (F1), far surpassing GPT‑5.4 (78.6) and Gemini 3.1 Pro (81.9). In the BrowseComp test with an agent swarm, the result is 86.3 versus K2.5’s 78.4. On LiveCodeBench v6 it scores 89.6, on par with the best closed alternatives.

K2.6 is purpose‑built for professional software engineering and autonomous agent systems:

  • it can continuously solve complex tasks for over 12 hours, with thousands of tool calls (optimization, DevOps, refactoring, cross‑platform development);
  • from text descriptions and mock‑ups it generates deployment‑ready web interfaces with full backend logic and authentication;
  • it orchestrates swarms of sub‑agents for parallel information gathering, research, and analysis, assembling the final results in the required format;
  • and much more.
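The interleaved-thinking loop behind these long agentic sessions can be sketched in a few lines: the model emits a reasoning step before every tool call and sees each observation before the next one, instead of committing to one upfront plan. The model and tool stubs below are placeholders for illustration, not a real K2.6 client.

```python
# Toy sketch of an interleaved think/act agent loop. call_model and TOOLS
# are stand-in stubs; a real deployment would call the served model and
# actual tools (search, shell, code execution, ...).

TOOLS = {
    "search": lambda q: f"results for {q!r}",
    "finish": lambda x: x,
}

def call_model(history):
    """Stub model: emits a thought, then picks a tool based on what it has seen."""
    if any(step[0] == "observation" for step in history):
        return "summarize findings", ("finish", "done")
    return "need background first", ("search", "kimi k2.6")

def run_agent(task, max_steps=10):
    history = [("task", task)]
    for _ in range(max_steps):
        thought, (tool, arg) = call_model(history)
        history.append(("thought", thought))      # reasoning before EACH call
        result = TOOLS[tool](arg)
        if tool == "finish":
            return result, history
        history.append(("observation", result))   # fed back before next thought
    return None, history

answer, trace = run_agent("survey MoE inference costs")
```

Keeping the thought entries in `history` across iterations is exactly what the preserve_thinking option does at the API level for real multi-step sessions.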


Announce Date: 14.04.2026
Parameters: 1T
Experts: 384
Activated at inference: 32B
Context: 256K
Layers: 61
Attention Type: Multi-head Latent Attention
Developer: Moonshot AI
Transformers Version: 4.56.2
License: MIT

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Kimi-K2.6 capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting Kimi-K2.6

Prices:
Name | Context | Type | GPU | Price, hour | TPS | Max Concurrency
rtx4090-1.32.64.160 | 262,144 | - | 1 | $1.18 | - | 30.981
rtxa5000-2.16.64.160.nvlink | 262,144 | tensor | 2 | $1.23 | - | 14.938
rtx5090-1.32.64.160 | 262,144 | - | 1 | $1.69 | - | 30.564
teslaa10-4.16.128.160 | 262,144 | tensor | 4 | $1.75 | - | 6.916
teslaa100-1.16.64.160 | 262,144 | - | 1 | $2.37 | - | 28.063
rtx3090-4.16.128.160 | 262,144 | tensor | 4 | $3.01 | - | 6.916
h100-1.16.64.160 | 262,144 | - | 1 | $3.83 | - | 28.063
h100nvl-1.16.96.160 | 262,144 | - | 1 | $4.11 | - | 27.334
teslaa100-2.24.256.160.nvlink | 262,144 | tensor | 2 | $4.93 | - | 12.020
h200-2.24.256.160.nvlink | 262,144 | tensor | 2 | $9.40 | - | 8.842
h200-4.32.768.480 | 262,144 | tensor | 4 | $19.23 | - | 0.821
Prices:
Name | Context | Type | GPU | Price, hour | TPS | Max Concurrency
rtx4090-1.32.64.160 | 262,144 | - | 1 | $1.18 | - | 54.472
rtxa5000-2.16.64.160.nvlink | 262,144 | tensor | 2 | $1.23 | - | 26.683
rtx5090-1.32.64.160 | 262,144 | - | 1 | $1.69 | - | 54.056
teslaa10-4.16.128.160 | 262,144 | tensor | 4 | $1.75 | - | 12.789
teslaa100-1.16.64.160 | 262,144 | - | 1 | $2.37 | - | 51.555
rtx3090-4.16.128.160 | 262,144 | tensor | 4 | $3.01 | - | 12.789
h100-1.16.64.160 | 262,144 | - | 1 | $3.83 | - | 51.555
h100nvl-1.16.96.160 | 262,144 | - | 1 | $4.11 | - | 50.826
teslaa100-2.24.256.160.nvlink | 262,144 | tensor | 2 | $4.93 | - | 23.766
h200-2.24.256.160.nvlink | 262,144 | tensor | 2 | $9.40 | - | 20.588
h200-4.32.768.480 | 262,144 | tensor | 4 | $19.23 | - | 6.694
Prices:
Name | Context | Type | GPU | Price, hour | TPS | Max Concurrency
rtx4090-1.32.64.160 | 262,144 | - | 1 | $1.18 | - | 109.614
rtxa5000-2.16.64.160.nvlink | 262,144 | tensor | 2 | $1.23 | - | 54.254
rtx5090-1.32.64.160 | 262,144 | - | 1 | $1.69 | - | 109.197
teslaa10-4.16.128.160 | 262,144 | tensor | 4 | $1.75 | - | 26.574
teslaa100-1.16.64.160 | 262,144 | - | 1 | $2.37 | - | 106.696
rtx3090-4.16.128.160 | 262,144 | tensor | 4 | $3.01 | - | 26.574
h100-1.16.64.160 | 262,144 | - | 1 | $3.83 | - | 106.696
h100nvl-1.16.96.160 | 262,144 | - | 1 | $4.11 | - | 105.967
teslaa100-2.24.256.160.nvlink | 262,144 | tensor | 2 | $4.93 | - | 51.337
h200-2.24.256.160.nvlink | 262,144 | tensor | 2 | $9.40 | - | 48.159
h200-4.32.768.480 | 262,144 | tensor | 4 | $19.23 | - | 20.479

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.