MiniMax-M1-80K is the most capable version of the MiniMax-M1 series. It retains the same 456 billion total parameters, with 45.9 billion activated per token, but has been further trained on 7.5 trillion tokens of diverse tasks, including mathematical reasoning, competitive programming, logical reasoning built with the SynLogic framework, and real-world software engineering tasks in sandboxed environments based on SWE-bench. Compared to MiniMax-M1-40K, its reasoning budget has been increased from 40,000 to 80,000 tokens.
The model was trained with an enhanced version of the CISPO algorithm combined with a progressive window-expansion strategy, which keeps training stable at each stage and lets the model adapt to increasing task complexity. A key innovation is the Lightning Attention mechanism, MiniMax's notable contribution to the field of large language models: it significantly reduces resource consumption during both training and inference. When generating 100,000 tokens, MiniMax-M1-80K consumes only about 25% of the FLOPs required by DeepSeek R1, making it exceptionally efficient for long-sequence processing. Full reinforcement learning training was completed in just three weeks on 512 H800 GPUs.
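Lightning Attention belongs to the broader linear-attention family, which avoids materializing the full n × n attention matrix. The toy sketch below illustrates only that general reassociation trick, not MiniMax's actual kernel; the feature map `phi` and the tensor sizes are illustrative assumptions.

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Standard attention builds an n x n score matrix, so compute and
    # memory grow quadratically with sequence length n.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Linear attention applies a feature map phi to Q and K, then
    # reassociates (phi(Q) @ phi(K).T) @ V as phi(Q) @ (phi(K).T @ V):
    # the d x d summary is independent of n, so cost grows linearly with n.
    Qp, Kp = phi(Q), phi(K)
    kv = Kp.T @ V                      # (d, d) running summary
    z = Qp @ Kp.sum(axis=0)            # (n,) normalization term
    return (Qp @ kv) / z[:, None]

# Tiny smoke test with assumed toy sizes (n=8 tokens, d=4 dims).
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8, 4))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```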
MiniMax-M1-80K demonstrates outstanding performance across multiple benchmarks, achieving 86.0% on AIME 2024 and 56.0% on SWE-bench Verified. Particularly impressive are its capabilities in agent-based systems that rely on tool calling, where it scores 62.0% on TAU-bench airline and 63.5% on TAU-bench retail, even outperforming Gemini 2.5 Pro in certain scenarios. These results confirm its suitability for a wide range of research applications and commercial products. Furthermore, the model is supported by modern deployment frameworks, including vLLM and Transformers, and is available under the Apache-2.0 license, making it fully open and unrestricted for use.
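As a hedged illustration of deployment, the sketch below serves the model with vLLM's offline API. The Hugging Face identifier `MiniMaxAI/MiniMax-M1-80k`, the sampling settings, and the tensor-parallel degree are assumptions; a model of this size requires a multi-GPU node in practice.

```python
# Minimal vLLM sketch. The model ID "MiniMaxAI/MiniMax-M1-80k" and the
# tensor_parallel_size are assumptions; adjust them to your checkpoint and hardware.
from vllm import LLM, SamplingParams

llm = LLM(
    model="MiniMaxAI/MiniMax-M1-80k",
    trust_remote_code=True,     # load the custom architecture code from the repo
    tensor_parallel_size=8,     # assumed multi-GPU node; tune to your setup
)

params = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=4096)
prompts = ["Write a Python function that checks whether a string is a palindrome."]

for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```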
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following scenarios:
Name | vCPU | RAM, MB | Disk, GB | GPU | Price
---|---|---|---|---|---
 | 44 | 524288 | 320 | 4 | $10.68
 | 44 | 524288 | 320 | 4 | $20.77
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.