Kimi-K2.5

reasoning
multimodal

Kimi K2.5 is built on a Mixture-of-Experts (MoE) architecture with 1 trillion total parameters, of which 32 billion are activated per token (384 experts with 8 active per token), ensuring high sparsity and efficiency. The model supports native INT4 quantization via quantization-aware training, which reduces inference hardware requirements.
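
As a rough illustration of the routing idea (not Moonshot's implementation; only the expert count and top-k follow the figures above, every other dimension is a toy value), a sparse MoE layer scores all experts per token but runs only the selected few:

```python
import torch
import torch.nn.functional as F

# Toy sparse-MoE routing: only num_experts and top_k follow the card above,
# every other dimension is a placeholder.
num_experts, top_k, d_model = 384, 8, 64

tokens = torch.randn(4, d_model)                  # 4 tokens in a batch
gate = torch.nn.Linear(d_model, num_experts)      # router producing expert logits
experts = torch.nn.ModuleList(
    torch.nn.Linear(d_model, d_model) for _ in range(num_experts)
)

logits = gate(tokens)                             # (4, 384) expert scores
weights, idx = torch.topk(logits, top_k, dim=-1)  # keep the 8 best experts per token
weights = F.softmax(weights, dim=-1)              # normalize over the selected 8

out = torch.zeros_like(tokens)
for t in range(tokens.size(0)):
    for w, e in zip(weights[t], idx[t]):
        out[t] += w * experts[int(e)](tokens[t])  # only 8 of 384 experts run per token
```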

The first key feature of Kimi K2.5 is native multimodality. Unlike many models where the visual component is added at later stages of training, Kimi K2.5 was trained on ~15 trillion mixed visual and text tokens from the very beginning of pre-training, with a fixed token ratio (e.g., 10% visual, 90% text). This yields better mutual adaptation between the modalities and prevents them from conflicting. Visual data is processed by a three-dimensional visual encoder, MoonViT-3D, which handles images and video within a unified feature space.

Coding with vision is another feature of K2.5: the model generates code from visual specifications such as UI designs, video workflows, screenshots, and diagrams. This capability is particularly in demand in front-end development, where K2.5 turns idea descriptions and visual references into fully functional, interactive interfaces.

The main innovation, however, is the Agent Swarm framework for parallel agent orchestration. It allows Kimi K2.5 to autonomously decompose a complex task, spin up an orchestrator, and launch up to 100 parallel sub-agents without predefined roles or manual workflows, assigning each its own sub-task. This reduces processing time by a factor of 3 to 4.5 on average while significantly improving response quality. To make parallelization genuinely effective, the training phase used a Critical Steps metric (analogous to the critical path in a computation graph), so orchestration and task distribution to sub-agents are triggered only when they actually speed up and improve the solution.
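
Conceptually, the orchestrator fans independent sub-tasks out to workers and merges the results. The asyncio sketch below illustrates that fan-out pattern only; it is not the Agent Swarm framework itself, and call_model is a hypothetical stand-in for a real sub-agent inference call.

```python
import asyncio

async def call_model(subtask: str) -> str:
    """Hypothetical stand-in for a sub-agent's model call."""
    await asyncio.sleep(0.1)  # placeholder for real inference latency
    return f"result for: {subtask}"

async def orchestrate(task: str, subtasks: list[str]) -> str:
    # Independent sub-tasks run concurrently rather than one after another,
    # which is where a 3-4.5x wall-clock reduction would come from.
    results = await asyncio.gather(*(call_model(s) for s in subtasks))
    return f"{task}: " + "; ".join(results)

if __name__ == "__main__":
    subtasks = [f"analyze source {i}" for i in range(10)]
    print(asyncio.run(orchestrate("survey the sources", subtasks)))
```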

According to benchmark results, the model leads in multimodal and agentic scenarios:

  • 1st place on LongVideoBench (79.8%) and LVBench (75.9%) for analyzing extremely long videos (over 2,000 frames);
  • 92.3% on OCRBench (text recognition in complex layouts);
  • 86.6% on VideoMMMU (interdisciplinary video understanding);
  • 78.4% on BrowseComp (complex research tasks) when using Agent Swarm, surpassing even GPT-5.2 Pro (77.9%);
  • 76.8% on SWE-Bench Verified (solving real GitHub issues) and 63.3% on OSWorld-Verified (automating actions in a graphical user interface without external tools).

Use cases for the model include:

  • analysis of multi-hour video content;
  • parallel research tasks, such as simultaneous analysis of hundreds of documents or internet sources;
  • code generation from visual mockups (UI screenshots to working HTML/React), as shown in the sketch below;
  • automation of computer interaction via the GUI (OS navigation, form filling);
  • multimodal analysis of financial reports and scientific articles with charts and diagrams.

K2.5's uniqueness lies in the synergy of native multimodality and a parallel agent architecture, which lets it solve tasks that sequential agents cannot handle because their execution time grows linearly with the number of steps.
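
Assuming the model is exposed through an OpenAI-compatible chat endpoint (the base URL, token, and model name below are placeholders, not values published on this page), the screenshot-to-code use case would look roughly like this:

```python
import base64
from openai import OpenAI

# Placeholders: base_url, api_key and model name are illustrative values only.
client = OpenAI(base_url="https://your-endpoint.example/v1", api_key="YOUR_API_TOKEN")

with open("mockup.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="kimi-k2.5",  # placeholder model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Turn this UI mockup into a single self-contained HTML page."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```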


Announce Date: 01.01.2026
Parameters: 1T
Experts: 384
Activated at inference: 32B
Context: 263K
Layers: 61
Attention Type: Multi-head Latent Attention
Developer: Moonshot AI
Transformers Version: 4.57.1
License: MIT

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Kimi-K2.5 capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the sketch after this list).
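
For the custom-weights scenario, the sketch below shows one plausible way to serve a LoRA adapter with vLLM; the model identifier, adapter path, and GPU count are placeholders rather than a verified recipe for this specific model.

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Placeholders: the model id, adapter path and tensor_parallel_size below are
# illustrative, not a verified deployment recipe for this model.
llm = LLM(
    model="moonshotai/Kimi-K2.5",
    tensor_parallel_size=8,
    enable_lora=True,
)

outputs = llm.generate(
    ["Draft a summary of the quarterly report."],
    SamplingParams(max_tokens=256),
    lora_request=LoRARequest("my_adapter", 1, "/path/to/lora_adapter"),
)
print(outputs[0].outputs[0].text)
```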

Recommended server configurations for hosting Kimi-K2.5

Prices:
Name                           Context    Type    GPU  Price, hour  TPS
teslaa100-8.44.704.960.nvlink  262,144.0  tensor  8    $18.78       6.558
h200-4.32.768.640              262,144.0  tensor  4    $19.25       3.154
h200-8.52.1024.1280            262,144.0  tensor  8    $37.41       3.723

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.