MiniMax-M1-40k

reasoning

MiniMax-M1-40k is a reasoning model built on a Mixture-of-Experts (MoE) architecture with 456 billion total parameters, of which 45.9 billion are activated per token. The model builds on the base version of the MiniMax-Text-01 series and incorporates the Lightning Attention mechanism, which sidesteps the quadratic complexity of standard transformer attention. This allows the model to process contexts of up to one million tokens efficiently.
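The efficiency gain behind linear-attention designs like Lightning Attention comes from reassociating the attention product. A toy NumPy sketch (not MiniMax's actual code; the feature map and the omission of normalization are simplifying assumptions) shows why the reordered computation avoids the n × n matrix:

```python
import numpy as np

# Toy illustration of the linear-attention trick: with a kernel feature
# map phi replacing softmax, (phi(Q) phi(K)^T) V can be reassociated as
# phi(Q) (phi(K)^T V), so cost drops from O(n^2 d) to O(n d^2).
rng = np.random.default_rng(0)
n, d = 512, 64                       # sequence length, head dimension
phi = lambda x: np.maximum(x, 0.0)   # simple non-negative feature map (assumption)

Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
q, k = phi(Q), phi(K)

# Quadratic form: materializes an n x n attention matrix
out_quadratic = (q @ k.T) @ V

# Linear form: k.T @ V is only d x d, independent of sequence length
out_linear = q @ (k.T @ V)

assert np.allclose(out_quadratic, out_linear)
```

Real linear-attention variants add a normalization term and a causal (prefix-sum) formulation for decoding; the sketch only demonstrates the associativity that makes million-token contexts tractable.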

The model was trained with the CISPO (Clipped Importance Sampling Policy Optimization) algorithm, which clips importance-sampling weights rather than token updates, reportedly doubling training efficiency over comparable methods. Because no token's update is clipped away, every token contributes to the gradient — particularly important for entropy stabilization and scalable reinforcement learning. A key feature of M1-40k is its balanced thinking budget of 40,000 tokens, which suits a broad range of complex analytical tasks.
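The core idea of clipping the importance-sampling ratio while keeping every token in the gradient can be sketched in a few lines. This is an illustrative simplification, not the official implementation; the epsilon values and probabilities below are hypothetical:

```python
import numpy as np

def cispo_weights(logp_new, logp_old, adv, eps_low=0.2, eps_high=0.2):
    """Per-token CISPO-style weights (illustrative sketch).

    The importance-sampling ratio is clipped and then treated as a
    constant (stop-gradient in a real framework), so every token still
    contributes a gradient through log-prob — unlike PPO-style clipping,
    which zeroes out updates for tokens whose ratio leaves the trust region.
    """
    ratio = np.exp(logp_new - logp_old)
    clipped = np.clip(ratio, 1.0 - eps_low, 1.0 + eps_high)
    return clipped * adv  # multiplies the gradient of log pi per token

# Hypothetical per-token probabilities and advantages
logp_old = np.log(np.array([0.50, 0.10, 0.90, 0.02]))
logp_new = np.log(np.array([0.60, 0.40, 0.30, 0.05]))
adv = np.array([1.0, 1.0, -1.0, 1.0])

w = cispo_weights(logp_new, logp_old, adv)
# No token's contribution is zeroed out, even far-off-policy ones:
assert np.all(w != 0.0)
```

The design choice is that clipping bounds the *magnitude* of each token's update without discarding the token entirely, which the CISPO authors argue stabilizes entropy during long reinforcement-learning runs.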

MiniMax-M1-40k excels in mathematical reasoning, achieving a score of 83.3% on AIME 2024. It also demonstrates strong performance in programming and code analysis, scoring 62.3% on LiveCodeBench and 67.6% on FullStackBench. The model achieves 60.0% on TAU-bench airline and 67.8% on TAU-bench retail, highlighting its suitability for building AI agents and applications that leverage various forms of tool calling.


Announce Date: 16.06.2025
Parameters: 456B
Experts: 32
Activated: 45.9B
Context: 1024K
Attention Type: Hybrid (Lightning Attention)
VRAM requirements: 289.9 GB with 4-bit quantization
Developer: MiniMax-AI
Transformers Version: 4.45.2
License: Apache 2.0
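As a rough sanity check of the VRAM figure above, the weights alone at 4-bit precision take params × bits / 8 bytes; the remainder of the listed 289.9 GB would go to KV cache and runtime overhead, which grow with context length. A minimal sketch (the helper function is ours, and decimal gigabytes are assumed):

```python
# Back-of-the-envelope weight memory for a quantized model.
# 456B parameters at 4 bits/parameter = 456e9 * 0.5 bytes = 228 GB,
# so the listed 289.9 GB leaves ~62 GB for KV cache and overhead.
def weight_memory_gb(params_billions: float, bits: int) -> float:
    return params_billions * 1e9 * bits / 8 / 1e9

weights_gb = weight_memory_gb(456, 4)
assert weights_gb == 228.0
```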

Public endpoint

Use our pre-built public endpoints to test inference and explore MiniMax-M1-40k capabilities.
Model Name Context Type GPU TPS Status Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended configurations for hosting MiniMax-M1-40k

Prices:
Name                    vCPU  RAM, MB  Disk, GB  GPU  Price, hour
teslaa100-4.44.512.320  44    524288   320       4    $10.68
teslah100-4.44.512.320  44    524288   320       4    $20.77

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.