MiniMax-M2.1

reasoning

MiniMax-M2.1 is a foundation language model specifically designed for coding tasks, with a focus on deeply understanding the full cycle of real-world software development. The training of M2.1 was aimed at creating a truly useful AI assistant capable of working with complex, multilingual projects and adapting to a developer's workflow. At the core of the model lies a full attention architecture, a choice the developers justify by the critical importance of understanding global context for complex engineering tasks. Unlike sparse architectures, full attention allows the model to simultaneously consider thousands of tokens from different files, specifications, commit histories, and instructions.
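The cost of that architectural choice can be illustrated with quick back-of-the-envelope arithmetic: with full attention, the score matrix grows quadratically with sequence length. The sketch below uses the 196,608-token context from the spec sheet; the fp16 format and the single-head framing are illustrative assumptions, not vendor numbers:

```python
# Back-of-the-envelope: size of one attention score matrix at full context.
# Only the 196,608-token context comes from the model card; fp16 scores and
# the per-head framing are illustrative assumptions.

context = 196_608          # tokens, per the spec sheet
entries = context ** 2     # one score per query/key pair
bytes_fp16 = entries * 2   # 2 bytes per fp16 score

print(f"{entries:.3e} scores per head per layer")               # ~3.865e+10
print(f"{bytes_fp16 / 2**30:.0f} GiB if materialized in fp16")  # ~72 GiB
```

In practice such matrices are computed in tiles (FlashAttention-style kernels), so the full matrix is never materialized at once, but the quadratic compute cost of full attention over long context remains.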

The main distinction of M2.1 from other LLMs, and from the previous M2 release, is the shift from evaluating whether "code compiles" to assessing whether "the application works and can be used." For this purpose the authors developed a new benchmark, VIBE Bench, which tests not just syntax but the model's ability to produce a fully functioning application, verified by an automated agent (Agent-as-a-Verifier) on three levels: execution, interactivity, and visual quality. On this benchmark, M2.1 ranks first, outperforming substantially larger open-source and proprietary models. M2.1 also avoids "overfitting to a specific framework," delivering consistently high results across different tools and programming languages. On Multi-SWE-Bench, which evaluates the ability to resolve real-world issues from repositories in multiple languages, M2.1 surpassed many closed-source models, confirming its effectiveness in practical software development.

Thanks to its unique features, MiniMax-M2.1 opens up a wide range of possibilities for developers:

  • full-stack application development: generating prototypes and production-ready applications;
  • automation of code review: finding bugs and bottlenecks in code;
  • refactoring and optimization of legacy code, improving performance and readability at the same time;
  • writing tests and documentation;
  • serving as a multilingual AI agent integrated into IDEs (e.g., Cursor) or CI/CD pipelines to handle complex, multi-step tasks that span multiple languages and frameworks.


Announce Date: 20.12.2025
Parameters: 229B
Experts: 256
Activated at inference: 10B
Context: 197K
Layers: 62
Attention Type: Full Attention
Developer: MiniMax-AI
Transformers Version: 4.46.1
License: MIT
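The spec sheet implies a highly sparse mixture-of-experts design: only a small fraction of the 229B parameters is active for each token. A quick sanity check on the published numbers (a sketch; the per-token expert count is not stated on this page):

```python
# Sanity-check the spec sheet: MoE sparsity and context length.
total_params = 229e9       # "Parameters: 229B"
active_params = 10e9       # "Activated at inference: 10B"
context_tokens = 196_608   # the "196,608" context listed in the server tables

ratio = active_params / total_params
print(f"Active fraction per token: {ratio:.1%}")        # ~4.4%
print(f"Context: {context_tokens / 1024:.0f}K tokens")  # 192K binary, ~197K decimal
```

This is why a 229B-parameter model can be served with the per-token compute budget of a ~10B dense model, while still needing enough GPU memory to hold all 256 experts.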

Public endpoint

Use our pre-built public endpoints for free to test inference and explore the capabilities of MiniMax-M2.1. You can obtain an API access token on the token management page after registration and verification.
Model Name | Context | Type | GPU | Status | Link
There are no public endpoints for this model yet.
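When an endpoint does become available, access typically follows the OpenAI-compatible pattern used by most hosted LLM APIs: a bearer token in the `Authorization` header and a JSON chat payload. The sketch below only builds the request without sending it; the URL, model identifier string, and token are placeholders, not documented values:

```python
import json
import urllib.request

# Hypothetical values: substitute the real endpoint URL and the token from
# the token management page. The model identifier is an assumption.
API_URL = "https://example.com/v1/chat/completions"  # placeholder endpoint
API_TOKEN = "YOUR_API_TOKEN"

payload = {
    "model": "MiniMax-M2.1",
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    "max_tokens": 512,
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; in the OpenAI-compatible schema
# the reply text arrives in choices[0].message.content.
print(req.get_header("Authorization"))  # Bearer YOUR_API_TOKEN
print(req.full_url)
```

Any OpenAI-compatible client library can be pointed at such an endpoint the same way, by overriding its base URL.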

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting MiniMax-M2.1

Prices:
Name | Context | Type | GPU | Price, hour | Max Concurrency
teslaa100-3.32.384.240 | 196,608 | tensor | 3 | $7.36 | 1.983
h100nvl-2.24.192.240 | 196,608 | tensor | 2 | $8.17 | 1.030
h200-2.24.256.240 | 196,608 | tensor | 2 | $9.41 | 2.850
rtx5090-8.44.256.240 | 196,608 | tensor | 8 | $11.55 | 2.024
h100-3.32.384.240 | 196,608 | tensor | 3 | $11.73 | 1.983
Prices:
Name | Context | Type | GPU | Price, hour | Max Concurrency
teslaa100-4.32.384.320.nvlink | 196,608 | tensor | 4 | $9.50 | 1.369
h200-3.32.512.480 | 196,608 | tensor | 3 | $14.36 | 3.417
h100-4.44.512.320 | 196,608 | tensor | 4 | $15.65 | 1.369
h100nvl-4.32.384.480 | 196,608 | tensor | 4 | $16.23 | 2.453
Prices:
Name | Context | Type | GPU | Price, hour | Max Concurrency
teslaa100-8.44.704.960.nvlink | 196,608 | tensor | 8 | $18.78 | 2.794
h200-4.32.768.640 | 196,608 | tensor | 4 | $19.25 | 1.538

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.