Kimi-K2-0905

Kimi K2-Instruct-0905 is an update to one of the largest open-source LLMs. The architecture remains largely unchanged: it is a Mixture-of-Experts (MoE) model with 1 trillion parameters, of which only 32 billion are activated to process each token. The model employs 384 routed experts, with the 8 most relevant selected per token plus one shared expert that is always active. It uses Multi-head Latent Attention (MLA), which significantly reduces the size of the KV cache. As in the previous version, training used the MuonClip optimizer, which addresses the critical issue of instability when training models at this scale.
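The routed-experts pattern described above is easy to sketch. The example below is a minimal, illustrative top-k router with one always-on shared expert; the toy hidden size, NumPy gate, and weight shapes are assumptions made for clarity and do not reflect Kimi K2's actual implementation.

```python
# Minimal sketch of MoE top-k routing with one shared expert.
# Only NUM_EXPERTS and TOP_K come from the model card; everything else is illustrative.
import numpy as np

NUM_EXPERTS = 384   # routed experts (from the model card)
TOP_K = 8           # experts activated per token (from the model card)
HIDDEN = 16         # toy hidden size for illustration

rng = np.random.default_rng(0)
router_w = rng.standard_normal((HIDDEN, NUM_EXPERTS))         # router projection
experts = rng.standard_normal((NUM_EXPERTS, HIDDEN, HIDDEN))  # routed expert weights
shared_expert = rng.standard_normal((HIDDEN, HIDDEN))         # always-on shared expert

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token (shape [HIDDEN]) to its top-k experts plus the shared expert."""
    logits = x @ router_w                      # score all routed experts
    top_idx = np.argsort(logits)[-TOP_K:]      # keep the 8 highest-scoring experts
    weights = np.exp(logits[top_idx])
    weights /= weights.sum()                   # softmax over the selected experts only
    routed = sum(w * (x @ experts[i]) for w, i in zip(weights, top_idx))
    return routed + x @ shared_expert          # shared expert output is always added

token = rng.standard_normal(HIDDEN)
print(moe_forward(token).shape)                # (16,)
```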

The most significant improvements highlighted by the developers include extending the context window from 128K to 256K tokens. In addition, the model has been specifically optimized for agentic use cases and programming tasks, with substantial enhancements to the frontend coding experience, covering both the aesthetic quality and the practical usability of the generated user interfaces.

The developers' claims are supported by benchmark results. On SWE-Bench Verified, the model scores 69.2%, significantly surpassing the previous version (65.8%) and competing closely with leading proprietary models such as Claude Sonnet 4 (72.7%) and Claude Opus 4 (72.5%). On Terminal-Bench, it scores 44.5%, clearly ahead of the prior version (37.5%), Qwen3-Coder (37.5%), GLM-4.5 (39.9%), and DeepSeek-V3.1 (31.3%).

Kimi K2-Instruct-0905 is well suited to autonomous agent workflows, where the model independently decomposes complex tasks, selects appropriate tools, and executes multi-step plans with minimal human intervention. In software development, it excels at debugging, code generation, data analysis, and orchestrating development processes. It is particularly effective in frontend development, since it can generate code that addresses not only the technical requirements of a task but also its design aspects.
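In practice, agentic use typically means driving the model through an OpenAI-compatible chat API with tool definitions. The sketch below is illustrative only: the base_url, API key, model identifier, and the run_tests tool are placeholders, not details taken from this page.

```python
# Hedged sketch: tool calling against an OpenAI-compatible endpoint serving Kimi-K2-0905.
# Replace the placeholder endpoint, key, and model name with your own deployment details.
from openai import OpenAI

client = OpenAI(base_url="https://your-endpoint.example/v1", api_key="YOUR_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "run_tests",  # hypothetical tool the agent may decide to call
        "description": "Run the project's test suite and return the output.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string", "description": "Test directory"}},
            "required": ["path"],
        },
    },
}]

response = client.chat.completions.create(
    model="Kimi-K2-Instruct-0905",  # assumed model identifier
    messages=[{"role": "user", "content": "Fix the failing test in src/utils and verify."}],
    tools=tools,
)
print(response.choices[0].message)  # either a direct answer or a tool call to execute
```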


Announce Date: 05.09.2025
Parameters: 1000B
Experts: 384
Activated: 32B
Context: 263K
Attention Type: Multi-head Latent Attention
VRAM requirements: 482.1 GB with 4-bit quantization
Developer: Moonshot AI
Transformers Version: 4.51.3
License: MIT

Public endpoint

Use our pre-built public endpoints to test inference and explore Kimi-K2-0905 capabilities.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios (a deployment sketch follows the list):

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
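As a starting point, a private instance can be served with vLLM's offline API. The sketch below is an assumption-laden illustration: the Hugging Face model id, tensor parallel degree, and context length are placeholders to adapt to your checkpoint and hardware, not a tested configuration from this page.

```python
# Hedged sketch: serving a private Kimi-K2-0905 instance with vLLM.
# Model id, tensor_parallel_size, and max_model_len are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="moonshotai/Kimi-K2-Instruct-0905",  # assumed repo id; point to local weights if needed
    tensor_parallel_size=8,                    # shard the MoE weights across 8 GPUs (illustrative)
    max_model_len=262144,                      # enable the full 256K-token context
    trust_remote_code=True,                    # Kimi K2 ships custom modeling code
)

params = SamplingParams(temperature=0.6, max_tokens=512)
outputs = llm.generate(["Write a minimal React component for a pricing card."], params)
print(outputs[0].outputs[0].text)
```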

Recommended configurations for hosting Kimi-K2-0905

There are no configurations for this model yet.


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.