MiniMax-M1-80k

reasoning

MiniMax-M1-80K is the most capable variant of the MiniMax-M1 series. It retains the same 456 billion total parameters, with 45.9 billion activated per token, but has been further trained on 7.5 trillion tokens of diverse tasks, including mathematical reasoning, competitive programming, logical reasoning built with the SynLogic framework, and real-world software engineering tasks run in sandboxed environments based on SWE-bench. Compared to MiniMax-M1-40K, its reasoning budget has been doubled from 40,000 to 80,000 tokens.

The model was trained with an enhanced version of the CISPO algorithm combined with a progressive window-expansion strategy, which keeps training stable at each stage while the model adapts to increasing task complexity. A central innovation is the Lightning Attention mechanism, MiniMax's hybrid linear-attention design for large language models. It significantly reduces resource consumption during both training and inference: MiniMax-M1-80K consumes only about 25% of the FLOPs required by DeepSeek R1 when generating 100,000 tokens, making it exceptionally efficient at long-sequence processing. Full reinforcement learning training was completed in just three weeks on 512 H800 GPUs.
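For intuition, here is a minimal PyTorch-style sketch of the CISPO idea as described in the MiniMax-M1 report: rather than clipping the token update itself the way PPO does, the importance-sampling weight is clipped and detached, so every token, including clipped ones, still contributes a bounded gradient. The function name, tensor shapes, and clipping bounds below are illustrative assumptions, not the official implementation.

```python
import torch

def cispo_loss(logp_new, logp_old, advantages, eps_low=1.0, eps_high=0.28):
    """Sketch of a CISPO-style objective (hyperparameter values are assumptions).

    logp_new:   log-probs of sampled tokens under the current policy, shape [T]
    logp_old:   log-probs under the behavior policy that generated them, shape [T]
    advantages: per-token (or broadcast per-sequence) advantage estimates, shape [T]
    """
    ratio = torch.exp(logp_new - logp_old)  # per-token importance-sampling weight
    # Clip the IS weight and stop its gradient. With eps_low=1.0 the lower bound
    # is effectively inactive; unlike PPO, clipped tokens still pass a learning
    # signal through logp_new instead of being zeroed out.
    r_hat = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high).detach()
    return -(r_hat * advantages * logp_new).mean()  # REINFORCE-style update
```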

MiniMax-M1-80K demonstrates outstanding performance across multiple benchmarks, achieving 86.0% on AIME 2024 and 56.0% on SWE-bench Verified. Particularly impressive are its capabilities in agent-based systems that rely on tool calling, where it scores 62.0% on TAU-bench airline and 63.5% on TAU-bench retail, even outperforming Gemini 2.5 Pro in certain scenarios. These results confirm its suitability for a wide range of research applications and commercial products. Furthermore, the model is supported by modern deployment frameworks, including vLLM and Transformers, and is available under the Apache-2.0 license, making it fully open and unrestricted for use.
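As a minimal sketch of the Transformers path (the model id matches the public Hugging Face repository; the dtype, device placement, and generation settings are our assumptions, not official guidance):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniMaxAI/MiniMax-M1-80k"  # public Hugging Face repository id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,  # the hybrid-attention architecture ships custom code
    device_map="auto",       # shard the 456B weights across all available GPUs
    torch_dtype="bfloat16",
)

messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Raise max_new_tokens toward the 80K reasoning budget for hard problems.
output = model.generate(inputs, max_new_tokens=4096)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```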


Announce Date: 16.06.2025
Parameters: 456B
Experts: 32
Activated: 45.9B
Context: 1000K (1M tokens)
Attention Type: Hybrid (Lightning Attention with periodic Full Attention)
VRAM requirements: 289.9 GB with 4-bit quantization
Developer: MiniMax-AI
Transformers Version: 4.45.2
License: Apache 2.0
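The 4-bit VRAM figure is easy to sanity-check: at roughly half a byte per weight, the 456B parameters alone occupy about 228 GB, and the remainder covers the KV cache, activations, and quantization metadata. A quick back-of-the-envelope check (the overhead breakdown is our assumption):

```python
total_params = 456e9   # 456B total parameters (MoE; all experts are stored)
bytes_per_param = 0.5  # 4-bit quantization ~= half a byte per weight

weights_gb = total_params * bytes_per_param / 1e9
overhead_gb = 289.9 - weights_gb  # what the listed requirement adds on top

print(f"packed weights: {weights_gb:.1f} GB")  # -> 228.0 GB
print(f"implied overhead (KV cache, activations, metadata): {overhead_gb:.1f} GB")
```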

Public endpoint

Use our pre-built public endpoints to test inference and explore MiniMax-M1-80k capabilities.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended configurations for hosting MiniMax-M1-80k

Prices:

Name                    vCPU  RAM, GB  Disk, GB  GPUs     Price, hour
teslaa100-4.44.512.320  44    512      320       4× A100  $10.68
teslah100-4.44.512.320  44    512      320       4× H100  $20.77
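Either 4-GPU configuration above offers 320 GB of combined VRAM, enough for the 4-bit model. Below is a minimal vLLM serving sketch for such an instance; the tensor-parallel degree, context cap, and sampling settings are assumptions chosen to fit this hardware, not a tested recipe.

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="MiniMaxAI/MiniMax-M1-80k",
    trust_remote_code=True,   # custom hybrid-attention architecture
    tensor_parallel_size=4,   # one shard per GPU on a 4x A100/H100 node
    max_model_len=128_000,    # cap context so the KV cache fits in remaining VRAM
)

params = SamplingParams(temperature=1.0, top_p=0.95, max_tokens=8192)
result = llm.generate(["Explain the CISPO reinforcement-learning objective."], params)
print(result[0].outputs[0].text)
```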


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.