MiniMax-M2.5

reasoning

MiniMax M2.5, like its predecessors, is built on a Mixture of Experts (MoE) architecture with 229 billion total parameters, of which only 10 billion are activated per forward pass. This extreme sparsity (roughly 4% of parameters active) lets the model combine high performance with computational efficiency and resource savings. The architecture uses 256 experts, 8 of which are active for each token, and offers a context window of 196,608 tokens (about 197K), which is sufficient for complex, multi-step tasks and for working with extensive documents.
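For intuition, here is a minimal top-k routing sketch of the kind of sparse MoE layer described above. Only the 256-expert / 8-active figures come from the published specs; the hidden sizes, the GELU expert MLPs, and the per-expert dispatch loop are illustrative assumptions, not MiniMax's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_EXPERTS, TOP_K = 256, 8        # from the published specs
D_MODEL, D_FF = 1024, 2048         # placeholder sizes, not M2.5's real dims

class MoELayer(nn.Module):
    """Toy top-k expert routing: only 8 of 256 expert FFNs run per token."""
    def __init__(self):
        super().__init__()
        self.router = nn.Linear(D_MODEL, NUM_EXPERTS, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(D_MODEL, D_FF), nn.GELU(),
                          nn.Linear(D_FF, D_MODEL))
            for _ in range(NUM_EXPERTS)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, D_MODEL)
        gate_logits = self.router(x)                    # (num_tokens, 256)
        weights, idx = gate_logits.topk(TOP_K, dim=-1)  # pick 8 experts per token
        weights = F.softmax(weights, dim=-1)            # renormalize the 8 gate scores
        out = torch.zeros_like(x)
        for k in range(TOP_K):
            for e in idx[:, k].unique():                # dispatch tokens expert by expert
                mask = idx[:, k] == e
                out[mask] += weights[mask, k].unsqueeze(-1) * self.experts[int(e)](x[mask])
        return out

out = MoELayer()(torch.randn(4, D_MODEL))   # 4 tokens in -> (4, D_MODEL) out
```

A production MoE fuses this dispatch into batched kernels; the explicit loop simply makes the 8-of-256 sparsity visible.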

At the core of MiniMax M2.5 lies the philosophy of "intelligence for everyone," realized through a heavy focus on reinforcement learning (RL). The developers confronted the classic trilemma of scaling RL: the need to simultaneously ensure high system throughput, training stability, and agent flexibility. Their solution is a proprietary framework named Forge. Forge introduces a middleware layer that completely decouples the agent's logic from the training and inference engine. This approach made it possible to train the model on arbitrary agentic scaffolds, including black-box ones, and endowed M2.5 with an unusual ability to generalize skills across thousands of different tools and calling formats.
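Forge itself is not public, so the sketch below only illustrates the decoupling idea: the scaffold calls a middleware object instead of the engine directly, and the RL trainer sees nothing but the recorded (prompt, completion) steps plus a reward. All class and function names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Any, Callable, Protocol

class Engine(Protocol):
    """Whatever serves or trains the model; the scaffold never touches it."""
    def generate(self, prompt: str, **params: Any) -> str: ...

@dataclass
class Trajectory:
    steps: list[tuple[str, str]]   # (prompt, completion) pairs seen by the engine
    reward: float = 0.0

class Middleware:
    """Decouples the agent scaffold from the training/inference engine."""
    def __init__(self, engine: Engine):
        self.engine = engine
        self.trace = Trajectory(steps=[])

    def complete(self, prompt: str, **params: Any) -> str:
        # The scaffold calls this instead of the engine; the middleware
        # records everything the RL trainer needs to learn from the episode.
        completion = self.engine.generate(prompt, **params)
        self.trace.steps.append((prompt, completion))
        return completion

def run_episode(scaffold: Callable[[Middleware, str], Any],
                engine: Engine, task: str,
                score: Callable[[Any], float]) -> Trajectory:
    mw = Middleware(engine)
    result = scaffold(mw, task)        # arbitrary agent logic, any tools/formats
    mw.trace.reward = score(result)    # outcome reward attached to the full trace
    return mw.trace
```

Because `run_episode` treats the scaffold as an opaque callable, any agent loop with any tool-calling format can be trained against the same engine, which is the "black box" property described above.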

A key architectural decision was integrating a context management mechanism directly into the RL loop. Where other models let context "dilute" over long planning horizons, M2.5 learns to treat context management (e.g., compressing or overwriting history) as one of the agent's actions. This lets the model not only stay within the technical context limit but also actively keep critically important information in focus, which matters for multi-step tasks. The choice of full attention fits this paradigm: the model preserves a holistic picture for decision-making rather than relying on an attention scheme that "guesses" which information to retain and which to evict for speed and resource savings.
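The source states only that compression and overwriting are learned as agent actions; the toy loop below, with invented action names and a character count standing in for a token budget, shows where such an action would sit in the agent's step function.

```python
from enum import Enum, auto

class Action(Enum):
    TOOL_CALL = auto()
    ANSWER = auto()
    COMPRESS_CONTEXT = auto()   # invented name: rewrite history as a summary

MAX_CONTEXT = 196_608           # published context limit, in tokens

def agent_step(policy, history: list[str]) -> list[str]:
    action, text = policy(history)          # the model itself picks the action
    if action is Action.COMPRESS_CONTEXT:
        # The model emits the summary that replaces older turns, so it
        # decides what stays in focus instead of a fixed eviction rule.
        history = [text] + history[-4:]
    else:
        history = history + [text]
    # Crude budget check; a real loop would count tokens, not characters.
    assert sum(len(h) for h in history) < MAX_CONTEXT
    return history

# Example with a trivial scripted "policy":
script = iter([(Action.TOOL_CALL, "search: why is the test flaky?"),
               (Action.COMPRESS_CONTEXT, "summary: test fails due to a race")])
policy = lambda history: next(script)
h = agent_step(policy, ["system prompt"])
h = agent_step(policy, h)   # history is now summary-first, recent turns kept
```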

The benchmark results speak for themselves: 80.2% on SWE-Bench Verified (solving real-world GitHub issues) and 51.3% on Multi-SWE-Bench are industry-leading, enabling M2.5 to outperform models like Claude Opus 4.6 and GPT-5.2 in specific coding scenarios. On the demanding search benchmark BrowseComp, the model achieved 76.3%, and it also posted leading results on the expert search benchmark RISE. But M2.5 stands out not just for its high benchmark scores but for the judiciousness of its solutions. The model was trained to solve problems optimally; as a result, it uses on average 20% fewer search iterations than its predecessor, M2.1. This follows from the RL reward function being tuned not only for the correctness of the answer but also for the efficiency of the trajectory, including execution time. Consequently, M2.5 demonstrates "architectural thinking," consistently decomposing a task up front before acting.
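MiniMax has not published its reward function; the following is a toy rendering of the shaping just described, with invented weights, where correctness dominates and trajectory length and wall-clock time are penalized.

```python
# Toy reward shaping in the spirit described above: correctness dominates,
# with small penalties per step and per second. The weights and functional
# form are illustrative assumptions, not MiniMax's actual values.
def trajectory_reward(correct: bool, n_steps: int, seconds: float,
                      step_cost: float = 0.01, time_cost: float = 0.001) -> float:
    outcome = 1.0 if correct else 0.0
    return outcome - step_cost * n_steps - time_cost * seconds

# A correct 10-step, 60 s run outscores a correct 25-step, 200 s run,
# which is what pushes the policy toward fewer search iterations.
assert trajectory_reward(True, 10, 60) > trajectory_reward(True, 25, 200)
```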

The use cases for M2.5 span every area that requires autonomous intelligence. In programming, it covers the full development cycle, from architecture to testing. In office work, it builds complex reports in Word, presentations in PowerPoint, and financial models in Excel to company standards. In research, it runs multi-step analyses, combining search with the synthesis of information from multiple sources. M2.5 is, in effect, a tangible example of "artificial intelligence as an employee": it takes on complex, multi-stage tasks and completes them with a level of quality and speed sufficient for immediate integration into business processes.


Announce Date: 12.02.2026
Parameters: 229B
Experts: 256
Activated at inference: 10B
Context: 197K
Layers: 62
Attention Type: Full Attention
Developer: MiniMax-AI
Transformers Version: 4.46.1
License: MIT

Public endpoint

Use our pre-built public endpoints for free to test inference and explore MiniMax-M2.5 capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
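Once an endpoint is listed, a request would look roughly like the sketch below. It assumes an OpenAI-compatible chat completions API; the base URL is a placeholder, and the token comes from the token management page mentioned above.

```python
import os
import requests

# Placeholder URL -- take the real one from the endpoint table once listed.
BASE_URL = "https://example.immers.cloud/v1"
TOKEN = os.environ["IMMERS_API_TOKEN"]   # from the token management page

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "model": "MiniMax-M2.5",
        "messages": [{"role": "user", "content": "Summarize this repo's build steps."}],
        "max_tokens": 512,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```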

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting MiniMax-M2.5

Prices:
Name                    Context  Type    GPU  Price, hour  TPS
teslaa100-3.32.384.240  196,608  tensor  3    $7.36        1.876
h200-2.24.256.240       196,608  tensor  2    $9.41        2.742
rtx5090-8.44.256.240    196,608  tensor  8    $11.55       1.916
h100-3.32.384.240       196,608  tensor  3    $11.73       1.876
h100nvl-3.24.384.480    196,608  tensor  3    $12.38       2.689
Prices:
Name                           Context  Type    GPU  Price, hour  TPS
teslaa100-4.32.384.320.nvlink  196,608  tensor  4    $9.50        1.369
h200-3.32.512.480              196,608  tensor  3    $14.36       3.417
h100-4.44.512.320              196,608  tensor  4    $15.65       1.369
h100nvl-4.32.384.480           196,608  tensor  4    $16.23       2.453
Prices:
Name                           Context  Type    GPU  Price, hour  TPS
teslaa100-8.44.704.960.nvlink  196,608  tensor  8    $18.78       2.796
h200-4.32.768.640              196,608  tensor  4    $19.25       1.540
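For budgeting, an hourly price converts to a rough monthly estimate as below (assuming ~730 hours per month; long-term monthly billing may be discounted).

```python
# Rough monthly cost from an hourly price, assuming ~730 hours per month.
# Long-term monthly rates may be lower; treat this as an upper bound.
def monthly_cost(price_per_hour: float, hours: int = 730) -> float:
    return price_per_hour * hours

print(f"${monthly_cost(7.36):,.2f}")   # cheapest listed config -> $5,372.80
print(f"${monthly_cost(19.25):,.2f}")  # largest listed config  -> $14,052.50
```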


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.