MiniMax M2.5, like its predecessors, is built on a Mixture of Experts (MoE) architecture with a total of 229 billion parameters, yet it activates only 10 billion during each forward pass. This extreme sparsity (activating only ~4% of parameters) allows the model to combine high performance with computational efficiency and resource savings. The architecture utilizes 256 experts, of which 8 are active for each token, and features a context window of up to 200,000 tokens, which is sufficient for handling complex, multi-step tasks and working with extensive documents.
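The routing scheme described above (8 active experts out of 256 per token) can be sketched as follows. This is an illustrative toy, not MiniMax's actual implementation: all names, dimensions, and the tiny ReLU-MLP experts are assumptions, and real MoE layers add load balancing and run batched on GPU.

```python
import numpy as np

def moe_forward(x, router_w, experts, k=8):
    """Route one token through only the top-k of n experts (illustrative sketch).

    x        : (d,) token hidden state
    router_w : (d, n_experts) router projection
    experts  : list of (w_in, w_out) weights, one tiny MLP per expert
    """
    logits = x @ router_w                     # score every expert for this token
    top = np.argsort(logits)[-k:]             # indices of the k highest-scoring experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                      # softmax over the selected experts only
    out = np.zeros_like(x)
    for gate, idx in zip(gates, top):         # only k of n experts do any compute
        w_in, w_out = experts[idx]
        out += gate * (np.maximum(x @ w_in, 0.0) @ w_out)
    return out

rng = np.random.default_rng(0)
d, n_experts = 16, 256
router_w = rng.normal(size=(d, n_experts)) * 0.1
experts = [(rng.normal(size=(d, 4)) * 0.1, rng.normal(size=(4, d)) * 0.1)
           for _ in range(n_experts)]
x = rng.normal(size=d)
y = moe_forward(x, router_w, experts)
print(y.shape)  # → (16,)
```

Only 8 of the 256 expert MLPs are ever multiplied for a given token, which is exactly why total parameter count (229B) and per-token compute (10B active) can diverge so sharply.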
At the core of MiniMax M2.5 lies the philosophy of "intelligence for everyone," realized through a focus on reinforcement learning (RL). The developers confronted the classic "trilemma" of scaling RL: the need to simultaneously ensure high system throughput, training stability, and agent flexibility. Their solution was a proprietary framework named Forge. Forge's architecture introduces an intermediary layer (Middleware) that completely decouples the agent's logic from the training and inference engine. This approach made it possible to train the model on arbitrary agentic scenarios ("scaffolds"), including "black boxes," and endowed M2.5 with a unique ability to effectively generalize skills across thousands of different tools and calling formats.
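The decoupling idea behind Forge can be sketched minimally: the scaffold sees only a text-in/text-out callable, while the middleware transparently logs trajectories for the trainer. Everything here is hypothetical (`Middleware`, `EchoEngine`, and the scaffold are invented names); Forge's real API is not public.

```python
from typing import Callable, Protocol

class Engine(Protocol):
    def generate(self, prompt: str) -> str: ...

class Middleware:
    """Sits between an agent scaffold and the engine: the scaffold only
    receives a plain callable, so even a 'black box' scaffold can be
    plugged into RL training unchanged (illustrative, not Forge's API)."""
    def __init__(self, engine: Engine):
        self.engine = engine
        self.trajectory = []  # (prompt, completion) pairs collected for the trainer

    def completion_fn(self) -> Callable[[str], str]:
        def call(prompt: str) -> str:
            out = self.engine.generate(prompt)
            self.trajectory.append((prompt, out))  # logged without the scaffold knowing
            return out
        return call

class EchoEngine:
    """Stand-in for the real inference engine."""
    def generate(self, prompt: str) -> str:
        return f"plan for: {prompt}"

def black_box_scaffold(llm: Callable[[str], str]) -> str:
    # Arbitrary agent logic; it knows nothing about training or inference details.
    return llm("fix the failing test")

mw = Middleware(EchoEngine())
result = black_box_scaffold(mw.completion_fn())
print(len(mw.trajectory))  # → 1
```

The point of the design is that swapping the engine (or the scaffold) requires no changes on the other side, which is what lets one model be trained across thousands of heterogeneous tool formats.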
A key architectural decision was the integration of a context management mechanism directly into the RL loop. Unlike other models, where context can "dilute" over long planning horizons, M2.5 learns to treat context management (e.g., compressing or overwriting history) as one of the agent's own actions. This allows the model not only to stay within the technical context limit but also to actively maintain focus on critically important information, which matters for multi-step tasks. The choice of full (rather than sparse) attention in this paradigm is justified by the need to preserve a holistic picture for decision-making: the model should not have to guess which information to retain and which to evict for the sake of speed and resource savings.
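The idea of context management as a first-class agent action can be sketched like this. The action names, the turn budget, and the summary text are all invented for illustration; the actual mechanism in M2.5 is learned, not rule-based.

```python
MAX_TURNS = 3  # illustrative context budget, measured in turns for simplicity

def apply_action(history, action, payload=""):
    """Apply an agent action to the conversation history (illustrative).

    Besides ordinary tool calls, the agent may emit 'compress', replacing
    old turns with its own summary, so the window never silently truncates."""
    if action == "compress":
        # Keep the agent's summary plus the most recent turn; drop the rest.
        return [f"summary: {payload}", history[-1]]
    return history + [payload]

history = []
for step in ["read repo", "run tests", "inspect failure", "patch file"]:
    history = apply_action(history, "tool_call", step)
    if len(history) > MAX_TURNS:
        history = apply_action(history, "compress", "tests fail in parser module")

print(history)  # → ['summary: tests fail in parser module', 'patch file']
```

Because compression is an action the policy chooses (and is rewarded for choosing well), the model decides *what* to summarize rather than having old context blindly cut off at the limit.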
The results on benchmarks speak for themselves: 80.2% on SWE-Bench Verified (solving real-world GitHub issues) and 51.3% on Multi-SWE-Bench represent the best in the industry, enabling M2.5 to outperform models like Claude Opus 4.6 and GPT-5.2 in specific coding scenarios. On the complex search benchmark BrowseComp, the model achieved 76.3%, and it also demonstrated leading results on the expert search benchmark RISE. But M2.5 stands out not just for its high benchmark scores, but for the judiciousness of its solutions. The model was trained to solve problems optimally; therefore, it uses on average 20% fewer search iterations than its predecessor, M2.1. This is a consequence of the reward function in RL being tuned not only for the correctness of the answer but also for the efficiency of the trajectory, including execution time. As a result, M2.5 demonstrates "architectural thinking," consistently decomposing tasks beforehand.
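A reward tuned for both correctness and trajectory efficiency, as described above, can be illustrated with a trivially simple shape. The actual M2.5 reward function is not public; the constants and the linear per-step penalty here are assumptions chosen only to show the trade-off.

```python
def trajectory_reward(solved: bool, n_steps: int) -> int:
    """Reward correct answers but penalize long trajectories (illustrative;
    M2.5's real reward also accounts for execution time and is not public).

    The per-step penalty pushes the policy toward shorter, better-planned
    solutions instead of brute-force iteration."""
    return (100 if solved else 0) - 2 * n_steps

# The same correct solution is worth more when reached in fewer steps.
print(trajectory_reward(True, 10))   # → 80
print(trajectory_reward(True, 25))   # → 50
print(trajectory_reward(False, 5))   # → -10  (wrong answers never pay off)
```

Under any reward of this form, an optimal policy learns to decompose the task up front, since wasted search iterations directly reduce the return even when the final answer is correct.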
The use cases for M2.5 span all areas requiring autonomous intelligence. In programming, it covers the full development cycle, from architecture to testing. In office work, it creates complex reports in Word, presentations in PowerPoint, and financial models in Excel based on company standards. In research, it conducts multi-step analysis, combining search with the synthesis of information from multiple sources. It can be said that M2.5 is a tangible example of "artificial intelligence as an employee," capable of taking on complex, multi-stage tasks and performing them with a level of quality and speed sufficient for immediate integration into business processes.
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following scenarios:
| Name | Context | Type | GPUs | Price | TPS | |
|---|---|---|---|---|---|---|
| — | 196,608 | tensor | 3 | $7.36 | 1.876 | Launch |
| — | 196,608 | tensor | 2 | $9.41 | 2.742 | Launch |
| — | 196,608 | tensor | 8 | $11.55 | 1.916 | Launch |
| — | 196,608 | tensor | 3 | $11.73 | 1.876 | Launch |
| — | 196,608 | tensor | 3 | $12.38 | 2.689 | Launch |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.