MiniMax-M2

reasoning

MiniMax-M2 is the foundational model that established the architectural and methodological principles for the entire series. Designed for maximum efficiency in agentic and programming scenarios, M2 demonstrates that a compact activated parameter budget can compete with far larger models given the right approach to training and data.

The model's architecture is a Mixture-of-Experts with 229 billion total parameters, of which only 10 billion are activated per token. A key design decision is the use of full attention instead of hybrid mechanisms: the team deliberately forwent sparse and linear attention after experiments showed quality degradation on complex multi-step reasoning and agentic tasks. The model implements Interleaved Thinking, in which reasoning segments can appear between generation steps and tool calls rather than only at the beginning of a dialogue.
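The sparse-activation idea behind the MoE design can be illustrated with a toy top-k router. The numbers below (8 experts, top-2, small hidden size) are deliberately scaled down for illustration and are not MiniMax-M2's real configuration; the production model has 256 experts and activates only a fraction of its 229B parameters per token.

```python
import numpy as np

# Toy sketch of top-k expert routing in a Mixture-of-Experts layer.
rng = np.random.default_rng(0)

num_experts = 8   # scaled down from 256 for illustration
top_k = 2         # experts activated per token
d_model = 16

# Router: one logit per expert. Each expert here is a single matrix;
# in a real model it would be a full MLP.
w_router = rng.normal(size=(d_model, num_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(num_experts)]

def route(x: np.ndarray) -> np.ndarray:
    """Select top_k experts per token and mix their outputs by gate weight."""
    logits = x @ w_router                                  # (tokens, experts)
    topk_idx = np.argsort(logits, axis=-1)[:, -top_k:]     # chosen experts
    # Softmax over the selected logits only.
    sel = np.take_along_axis(logits, topk_idx, axis=-1)
    gates = np.exp(sel - sel.max(axis=-1, keepdims=True))
    gates /= gates.sum(axis=-1, keepdims=True)
    # Only the chosen experts run for each token.
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        for slot in range(top_k):
            e = topk_idx[t, slot]
            out[t] += gates[t, slot] * (x[t] @ experts[e])
    return out

tokens = rng.normal(size=(4, d_model))
y = route(tokens)
print(y.shape)  # (4, 16)
```

Per token, only 2 of 8 experts execute, which is how a large total parameter count coexists with a small per-token compute cost.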

M2's significance lies in its role as the foundation for the entire series. It is not merely a standalone model but the first realization of the MiniMax philosophy that the future lies in "agent-native" LLMs: the model is prepared not just for text generation, but for solving problems within an agentic loop that requires planning, tool use, and adaptation to feedback. In practice, this makes M2 well suited for deployment as the intelligent core of assistants that work with documents, analyze tables, and generate structured responses requiring domain understanding.
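A client consuming Interleaved Thinking output has to separate reasoning segments from visible text. The sketch below assumes reasoning is wrapped in `<think>...</think>` tags, as the model card describes; the transcript itself is made up for illustration.

```python
import re

# Made-up transcript: reasoning segments interleaved with a tool call
# and a final visible answer.
transcript = (
    "<think>The user wants row counts; query the table first.</think>"
    "Calling tool: sql_query(...)"
    "<think>The result has 42 rows; summarize it.</think>"
    "The table contains 42 rows."
)

def split_segments(text: str):
    """Return (reasoning, visible) segment lists in order of appearance."""
    reasoning = re.findall(r"<think>(.*?)</think>", text, flags=re.S)
    visible = [s for s in re.split(r"<think>.*?</think>", text, flags=re.S) if s]
    return reasoning, visible

reasoning, visible = split_segments(transcript)
print(len(reasoning), len(visible))  # 2 2
```

Note that reasoning appears between the tool call and the final answer, not only at the start, which is the distinguishing property of the interleaved pattern.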


Announce Date: 22.10.2025
Parameters: 229B
Experts: 256
Activated at inference: 10B
Context: 197K
Layers: 62
Attention Type: Full Attention
Developer: MiniMax-AI
Transformers Version: 4.57.1
License: MIT
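The spec table above implies a very small active fraction per forward pass; the quick arithmetic below derives it, plus a rough weight-storage figure (weights only, BF16, excluding KV cache and activations).

```python
# Arithmetic from the spec table: 229B total, 10B active per token.
total_params = 229e9
active_params = 10e9

ratio = active_params / total_params
print(f"{ratio:.1%}")  # roughly 4.4% of parameters active per forward pass

# Rough weight-storage estimate in BF16 (2 bytes per parameter).
bytes_per_param_bf16 = 2
weights_gb = total_params * bytes_per_param_bf16 / 1e9
print(f"{weights_gb:.0f} GB")  # 458 GB of weights in BF16
```

This is why multi-GPU configurations appear in the hosting tables below: the full weights do not fit on a single accelerator, even though per-token compute is modest.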

Public endpoint

Use our pre-built public endpoints for free to test inference and explore MiniMax-M2 capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name Context Type GPU Status Link
There are no public endpoints for this model yet.
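Once an endpoint is available, a request would typically follow the OpenAI-compatible chat-completions shape. The sketch below only assembles the request body; the base URL is a placeholder and the exact model identifier may differ — check the endpoint card and use the API token from the token management page.

```python
import json

# Placeholders — substitute the real endpoint URL and your token.
BASE_URL = "https://example-endpoint.invalid/v1/chat/completions"
API_TOKEN = "YOUR_API_TOKEN"

payload = {
    "model": "MiniMax-M2",  # model identifier may differ per endpoint
    "messages": [
        {"role": "user", "content": "Summarize this table in two sentences."},
    ],
    "max_tokens": 512,
}
headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/json",
}

body = json.dumps(payload)  # POST this to BASE_URL with the headers above
print(payload["model"])
```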

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
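Serving the full context is memory-intensive, which is one reason a dedicated instance helps. The back-of-envelope estimate below uses the layer count (62) and context length (196,608 tokens) from the spec table; the KV head count and head dimension are assumptions for illustration only, as the actual attention shape is not listed here.

```python
# Back-of-envelope KV-cache size for one sequence at full context.
layers = 62            # from the spec table
seq_len = 196_608      # full context, from the spec table
kv_heads = 8           # ASSUMED (grouped-query attention is common)
head_dim = 128         # ASSUMED
bytes_per_value = 2    # BF16

# Factor of 2 for keys and values.
kv_bytes = 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value
print(f"{kv_bytes / 1e9:.0f} GB per sequence")  # ~50 GB under these assumptions
```

Even under these modest assumptions, a single full-context sequence consumes tens of gigabytes of KV cache on top of the weights, so concurrency at long context is bounded by total GPU memory.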

Recommended server configurations for hosting MiniMax-M2

Prices:
Name                           Context  Parallelism  GPUs  Price, hour  TPS
teslaa100-3.32.384.240         196,608  tensor       3     $7.36        2.064
h100nvl-2.24.192.240           196,608  tensor       2     $8.17        1.111
h200-2.24.256.240              196,608  tensor       2     $9.41        2.931
rtx5090-8.44.256.240           196,608  tensor       8     $11.55       2.105
h100-3.32.384.240              196,608  tensor       3     $11.73       2.064

Prices:
Name                           Context  Parallelism  GPUs  Price, hour  TPS
teslaa100-4.32.384.320.nvlink  196,608  tensor       4     $9.50        1.369
h200-3.32.512.480              196,608  tensor       3     $14.36       3.417
h100-4.44.512.320              196,608  tensor       4     $15.65       1.369
h100nvl-4.32.384.480           196,608  tensor       4     $16.23       2.453

Prices:
Name                           Context  Parallelism  GPUs  Price, hour  TPS
teslaa100-8.44.704.960.nvlink  196,608  tensor       8     $18.78       2.794
h200-4.32.768.640              196,608  tensor       4     $19.25       1.538

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.