MiniMax-M2.7

reasoning
coding

MiniMax M2.7 is the flagship model from MiniMax, built on a sparse Mixture-of-Experts (MoE) architecture. Of its roughly 230 billion total parameters, only 10 billion (under 5%) are activated per token, giving it the efficiency and speed of far more compact models while retaining the breadth of knowledge of a frontier-scale system. The developers position M2.7 as the first major language model to have actively participated in its own evolution: it was integrated into its own creation process, independently updating its own memory, building dozens of complex skills within its agentic "harness" framework, and assisting the development team with RL experiments, metric analysis, and debugging. This closed loop, in which the model analyzes its own failures, corrects code, and re-evaluates the result, reportedly yielded a 30% efficiency gain without human intervention. This approach marks a transition from static, "frozen" models to dynamic systems capable of continuous self-improvement. Unlike most competitors, which rely on hybrid or efficient attention mechanisms, MiniMax made a principled decision in the M2 series to use "pure" full attention, sacrificing theoretical computational efficiency for guaranteed stability and high quality in complex scenarios, especially tasks requiring long reasoning chains and agent interaction.
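The sparse routing described above can be sketched in a few lines. This is a generic top-k MoE gate in plain NumPy, not MiniMax's actual implementation; the expert count and dimensions here are arbitrary stand-ins:

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Sparse MoE layer sketch: route token x to the top-k of N experts."""
    logits = x @ gate_w                      # router scores, one per expert
    topk = np.argsort(logits)[-k:]           # indices of the k best experts
    weights = np.exp(logits[topk])
    weights /= weights.sum()                 # renormalized softmax over top-k
    # Only the selected experts run, so compute scales with k, not with N
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

rng = np.random.default_rng(0)
d, n_experts = 16, 8                         # toy sizes for illustration
gate_w = rng.normal(size=(d, n_experts))
experts = [lambda x, W=rng.normal(size=(d, d)): x @ W for _ in range(n_experts)]
y = moe_forward(rng.normal(size=d), gate_w, experts, k=2)
```

With k=2 of 8 experts active, each token pays for a quarter of the expert compute, which is the same principle that lets M2.7 activate only ~10B of its parameters per token.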

The key development vector for M2.7 is deep agentic capability. While most models marketed as "agentic" are limited to calling a few external functions, M2.7 is designed to create and manage complex agentic frameworks (harnesses). The model can independently construct multi-level workflows, coordinate teams of multiple agents (Agent Teams), and operate with over 40 complex skills, each of which can exceed 2,000 tokens, while maintaining a 97% skill-adherence (instruction-following) rate. This lets M2.7 act not merely as an executor of individual commands but as a "conductor" of complex, multi-component production tasks.

The model also focuses heavily on professional office work (Professional Office Domain). This goes beyond opening a document: M2.7 can perform complex, high-precision editing in Excel, PowerPoint, and Word while preserving formatting and supporting multiple revisions. It is trained to behave like an analyst: it can read a company's financial statements, identify trends, build a predictive model in Excel, and, on that basis, prepare a comprehensive presentation and text report. This elevates office automation to a fundamentally new level, closing the gap between understanding a task and delivering a finished business result.
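The harness pattern described above, named skills that the model invokes by emitting structured calls, can be illustrated with a minimal registry. This is a generic sketch, not MiniMax's actual skill format; the skill name and call schema here are invented for illustration:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Harness:
    """Minimal agentic-harness sketch: a registry of named skills that
    the model invokes by emitting {'skill': name, 'args': {...}}."""
    skills: dict = field(default_factory=dict)

    def register(self, name: str) -> Callable:
        def deco(fn: Callable) -> Callable:
            self.skills[name] = fn
            return fn
        return deco

    def dispatch(self, call: dict) -> Any:
        # Route the model's structured call to the matching skill
        fn = self.skills[call["skill"]]
        return fn(**call.get("args", {}))

harness = Harness()

@harness.register("summarize_sheet")          # hypothetical office skill
def summarize_sheet(rows):
    total = sum(r["revenue"] for r in rows)   # stand-in for real analysis
    return {"rows": len(rows), "total_revenue": total}

result = harness.dispatch({"skill": "summarize_sheet",
                           "args": {"rows": [{"revenue": 120}, {"revenue": 80}]}})
```

A production harness adds validation, error recovery, and multi-agent routing on top of this dispatch core; the 97% skill-adherence figure measures how reliably the model stays within such a contract.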

These capabilities are confirmed by strong results in independent tests. On SWE-Pro, which tests the ability to solve real software-engineering tasks in complex repositories, M2.7 scored 56.22%, closely approaching the best results of Claude 4.6 and GPT-5.4, and on Multi-SWE-Bench it even emerged as the leader with a score of 52.7. M2.7 is thus a universal tool for professionals, offering performance on par with top-tier closed models.

M2.7 is well suited for autonomous software development, from code refactoring and log analysis to end-to-end project management with tools. Thanks to native support for Agent Teams, an extensive library of complex skills, and dynamic tool discovery, it serves as a reliable foundation for multi-level agentic systems that coordinate several "specialists" across complex business processes. The model is also oriented toward advanced office automation, including editing Excel, Word, and PowerPoint documents, building financial models, and generating complex data-driven reports. In research and R&D work, M2.7 acts as a full-fledged assistant for literature reviews, experiment planning, and results analysis, while enhanced character consistency and emotional intelligence make it an excellent choice for interactive platforms such as OpenRoom and other creative projects.
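For the tool-driven workflows above, hosted deployments of models like this typically expose an OpenAI-compatible chat endpoint. The sketch below only constructs a plausible request body; the `read_excel` tool name and the exact schema are assumptions for illustration, and the base URL and token would come from your provider's dashboard:

```python
import json

# Hypothetical OpenAI-compatible request body for a hosted MiniMax-M2.7
# endpoint; no network call is made here, we only build the payload.
payload = {
    "model": "MiniMax-M2.7",
    "messages": [
        {"role": "system", "content": "You are a financial-analysis agent."},
        {"role": "user", "content": "Summarize revenue trends in report.xlsx."},
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "read_excel",  # hypothetical tool exposed by the client
            "description": "Read a worksheet and return its rows as JSON",
            "parameters": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        },
    }],
}
body = json.dumps(payload)
```

The model would respond with either text or a `tool_calls` entry naming `read_excel`, which the client executes and feeds back as a `tool` message, the same loop the harness pattern generalizes.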


Announce Date: 09.04.2026
Parameters: 229B
Experts: 256
Activated at inference: 10B
Context: 197K
Layers: 62
Attention Type: Full Attention
Developer: MiniMax-AI
Transformers Version: 4.46.1
License: MIT
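The parameter count above translates directly into VRAM requirements, which explains the multi-GPU configurations listed below. A back-of-envelope estimate for the weights alone (KV cache and activations excluded; which quantization a given server uses is an assumption):

```python
# Rough VRAM needed just to hold 229B parameters at common precisions.
params = 229e9
estimates = {}
for name, bytes_per_param in [("FP16", 2), ("FP8", 1), ("INT4", 0.5)]:
    estimates[name] = params * bytes_per_param / 1024**3  # GiB
    print(f"{name}: ~{estimates[name]:,.0f} GiB of weights")
```

At FP16 the weights alone need roughly 427 GiB, which is why even quantized deployments spread the model across several 80-141 GiB GPUs with tensor parallelism.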

Public endpoint

Use our pre-built public endpoints for free to test inference and explore MiniMax-M2.7 capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name Context Type GPU Status Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting MiniMax-M2.7

Prices:

Name | Context | Type | GPU | Price, hour | TPS
teslaa100-3.32.384.240 | 196,608 | tensor | 3 | $7.36 | 1.870
h200-2.24.256.240 | 196,608 | tensor | 2 | $9.41 | 2.737
h200-2.24.256.240.nvlink | 196,608 | tensor | 2 | $9.41 | 2.737
teslaa100-4.32.384.320.nvlink | 196,608 | tensor | 4 | $9.50 | 3.365
rtx5090-8.44.256.240 | 196,608 | tensor | 8 | $11.55 | 1.911
h100-3.32.384.240 | 196,608 | tensor | 3 | $11.73 | 1.870
h100nvl-3.24.384.480 | 196,608 | tensor | 3 | $12.38 | 2.683
Prices:

Name | Context | Type | GPU | Price, hour | TPS
teslaa100-4.32.384.320.nvlink | 196,608 | tensor | 4 | $9.50 | 1.369
teslaa100-4.44.512.320 | 196,608 | tensor | 4 | $9.83 | 1.369
h200-3.32.512.480 | 196,608 | tensor | 3 | $14.36 | 3.417
h100-4.44.512.320 | 196,608 | tensor | 4 | $15.65 | 1.369
h100nvl-4.32.384.480 | 196,608 | tensor | 4 | $16.23 | 2.453
h200-4.32.768.480.nvlink | 196,608 | tensor | 4 | $19.23 | 6.092
Prices:

Name | Context | Type | GPU | Price, hour | TPS
teslaa100-8.44.704.960.nvlink | 196,608 | tensor | 8 | $18.78 | 2.794
h200-4.32.768.640 | 196,608 | tensor | 4 | $19.25 | 1.538
h200-4.32.768.640.nvlink | 196,608 | tensor | 4 | $19.25 | 1.538
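When comparing the hourly rates above against long-term monthly billing, a quick cost projection helps. This assumes 24/7 uptime and a 30-day month, using the cheapest listed configuration as the example:

```python
# Projected always-on monthly cost for teslaa100-3.32.384.240 at $7.36/hour.
hourly = 7.36
monthly = hourly * 24 * 30   # 720 billable hours per 30-day month
print(f"${monthly:,.2f}/month")  # $5,299.20/month
```

The same arithmetic applied to any row above gives a quick ceiling for budgeting; actual long-term monthly plans may be discounted relative to the hourly rate.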

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.