Kimi K2-Instruct-0905 is an update to one of the largest open-source LLMs. The architecture remains largely unchanged: it is a Mixture-of-Experts (MoE) model with 1 trillion parameters, of which only 32 billion are activated for each token. The model employs 384 experts, with the 8 most relevant experts selected per token, plus one shared expert. It uses the Multi-Head Latent Attention (MLA) mechanism, which significantly reduces the size of the KV cache. As in the previous version, the MuonClip optimizer was used during training, effectively addressing the critical issue of training instability at this scale.
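The routing scheme described above can be sketched in a few lines. This is an illustrative top-k gating sketch, not Kimi's actual implementation; the shapes, the softmax-over-selected-experts weighting, and the always-on shared expert are assumptions based on common MoE designs.

```python
import numpy as np

def moe_route(hidden, gate_w, experts, shared_expert, top_k=8):
    """Illustrative top-k MoE routing (hypothetical, not Kimi's real code).

    hidden: (d,) token hidden state
    gate_w: (num_experts, d) router weights
    experts: list of per-expert callables
    shared_expert: callable applied to every token
    """
    logits = gate_w @ hidden                 # one router score per expert
    top = np.argsort(logits)[-top_k:]        # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts only
    out = sum(w * experts[i](hidden) for w, i in zip(weights, top))
    return out + shared_expert(hidden)       # the shared expert always contributes

# Toy usage with Kimi-like counts: 384 experts, 8 active per token
rng = np.random.default_rng(0)
d, n_exp = 16, 384
gate_w = rng.normal(size=(n_exp, d))
experts = [lambda x, W=rng.normal(size=(d, d)): W @ x for _ in range(n_exp)]
shared = lambda x: x
y = moe_route(rng.normal(size=d), gate_w, experts, shared)
print(y.shape)
```

Note that only 8 of the 384 expert functions are evaluated per token, which is what keeps the active parameter count at 32B despite the 1T total.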
The most significant improvements highlighted by the developers include extending the context window from 128K to 256K tokens. Additionally, the model has been specifically optimized for agent-like use cases and programming tasks, with substantial enhancements to the "frontend coding experience"—both in terms of aesthetic quality and practical usability of the generated user interfaces.
The developers' claims are supported by benchmark results. On SWE-Bench Verified, the model resolves 69.2% of tasks, significantly surpassing the previous version (65.8%) and competing closely with leading proprietary models such as Claude Sonnet 4 (72.7%) and Claude Opus 4 (72.5%). On Terminal-Bench, it scores 44.5%, clearly outperforming its competitors: the prior version (37.5%), Qwen3-Coder (37.5%), GLM-4.5 (39.9%), and DeepSeek-V3.1 (31.3%).
Kimi K2-Instruct-0905 is well suited for autonomous workflows, where the model independently decomposes complex tasks, selects appropriate tools, and executes multi-step plans with minimal human intervention. In software development, the model excels at debugging, code generation, data analysis, and orchestrating development processes. It is particularly effective in frontend development, generating code that addresses not only the technical requirements of a task but also its design aspects.
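The decompose-select-execute pattern above typically rests on a tool-calling loop on the application side. The sketch below shows a minimal dispatcher for model-issued tool calls; the tool names, JSON shape, and stubbed results are hypothetical, and a real agent would register these tools with the model's tool-calling API and feed results back into the conversation.

```python
import json

# Hypothetical tool registry; results are stubbed for illustration.
TOOLS = {
    "run_tests": lambda args: {"passed": 12, "failed": 0},
    "read_file": lambda args: {"content": f"# contents of {args['path']}"},
}

def execute_step(tool_call_json):
    """Dispatch one model-issued tool call and return its result."""
    call = json.loads(tool_call_json)          # the model emits a JSON tool call
    tool = TOOLS[call["name"]]                 # look up the requested tool
    return tool(call.get("arguments", {}))     # run it with the given arguments

# A model decomposing a debugging task might emit a call like this:
result = execute_step('{"name": "run_tests", "arguments": {}}')
print(result)  # {'passed': 12, 'failed': 0}
```

In a full agent loop, the result dict would be serialized back to the model, which then decides the next step or finishes the task.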
| Model Name | Context | Type | GPU | TPS | Status | Link |
|---|---|---|---|---|---|---|
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
There are no configurations for this model yet.
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.