Ministral-3-14B-Reasoning-2512 is the flagship model of the Ministral 3 lineup: a reasoning variant whose post-training is specifically optimized for complex tasks that require multi-step reasoning. The architecture is modular, pairing a 13.5B-parameter language model with a 0.4B-parameter vision encoder, so the model can efficiently analyze images and ground its outputs in visual content. A key technical feature is Cascade Distillation, an iterative distillation-and-pruning method that derives the model from its parent, Mistral Small 3.1 (24B), reducing size by more than 40% while preserving high quality. The model supports dozens of languages, adheres strictly to system prompts, and has strong agentic capabilities, including built-in function calling and JSON output. Its 256k-token context window allows it to process large documents and long-running conversations.
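The function-calling and JSON-output capabilities are typically exercised through an OpenAI-compatible chat endpoint. The sketch below builds such a request body; the endpoint schema is an assumption (adapt field names to whatever API your deployment exposes), and `get_weather` is a hypothetical tool used purely for illustration.

```python
import json

# Hypothetical tool definition for illustration; the OpenAI-compatible
# request shape is an assumption, not part of the model card.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical helper
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

request_body = {
    "model": "mistralai/Ministral-3-14B-Reasoning-2512",
    "messages": [
        {"role": "system", "content": "Reason step by step before answering."},
        {"role": "user", "content": "What is the weather in Paris?"},
    ],
    "tools": [get_weather_tool],  # keep the tool set minimal
    "temperature": 1,             # the developers' recommended default
}

print(json.dumps(request_body, indent=2))
```

The body serializes to plain JSON, so it can be sent with any HTTP client.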
On benchmarks, Ministral-3-14B-Reasoning posts strong results. On the AIME 2024 and AIME 2025 math benchmarks it scores 89.8% and 85.0% respectively, confirming its ability to solve Olympiad-level problems. It reaches 71.2% on scientific reasoning (GPQA Diamond) and 64.6% on programming tasks (LiveCodeBench). According to the technical report, the model outperformed all known alternatives of comparable size at the time of release.
The developers' usage recommendations:

- Images: keep the aspect ratio close to 1:1 (width/height) and avoid very narrow or very wide frames; crop images when necessary for optimal performance.
- Multi-step dialogues: keep the model's reasoning traces in context.
- System prompt and sampling: for most tasks, set a system prompt that defines the reasoning order (example from the developers: https://huggingface.co/mistralai/Ministral-3-14B-Reasoning-2512/blob/main/SYSTEM_PROMPT.txt) and use a sampling temperature of 1, though experimentation is acceptable.
- Tools: limit the tool set to the minimum necessary, avoiding overloading the model with an excessive number of functions.

The model is particularly effective in mathematics, programming, and other areas that require deep step-by-step reasoning combined with image analysis.
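The aspect-ratio advice can be applied with a small pre-processing helper. The sketch below computes a centered crop box that clamps an image's aspect ratio; the 1.5:1 threshold is an arbitrary assumption (the developers only say "close to 1:1"), and the actual cropping would be done with your imaging library of choice (e.g. Pillow's `Image.crop`, which accepts exactly this box tuple).

```python
def center_crop_box(width: int, height: int, max_ratio: float = 1.5):
    """Compute a centered crop box (left, top, right, bottom) that clamps
    the aspect ratio to at most `max_ratio`:1 in either direction.

    `max_ratio` is an illustrative threshold, not an official value.
    """
    if width > height * max_ratio:       # too wide: trim left and right
        new_w = int(height * max_ratio)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    if height > width * max_ratio:       # too tall: trim top and bottom
        new_h = int(width * max_ratio)
        top = (height - new_h) // 2
        return (0, top, width, top + new_h)
    return (0, 0, width, height)         # already close enough to 1:1

# A 4000x1000 panorama gets trimmed to 1500x1000 (1.5:1):
print(center_crop_box(4000, 1000))  # (1250, 0, 2750, 1000)
```

Images already within the threshold are returned unchanged, so the helper is safe to apply unconditionally before sending images to the model.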
| Model Name | Context | Type | GPU | Status | Link |
|---|---|---|---|---|---|
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following scenarios:
| Context | Parallelism | GPUs | Price/hr | Max Concurrency |
|---|---|---|---|---|
| 262,144 | pipeline | 3 | $1.34 | 1.203 |
| 262,144 | tensor | 4 | $1.62 | 1.681 |
| 262,144 | pipeline | 6 | $1.65 | 1.556 |
| 262,144 | pipeline | 3 | $2.29 | 1.203 |
| 262,144 | tensor | 4 | $2.34 | 1.681 |
| 262,144 | none | 1 | $2.37 | 1.508 |
| 262,144 | pipeline | 3 | $2.83 | 1.203 |
| 262,144 | tensor | 4 | $2.89 | 1.681 |
| 262,144 | tensor | 2 | $2.93 | 1.086 |
| 262,144 | tensor | 4 | $3.60 | 1.681 |
| 262,144 | none | 1 | $3.83 | 1.508 |
| 262,144 | none | 1 | $4.11 | 1.823 |
| 262,144 | tensor | 2 | $4.61 | 3.246 |
| 262,144 | none | 1 | $4.74 | 2.881 |
| 262,144 | tensor | 2 | $9.40 | 5.991 |
| Context | Parallelism | GPUs | Price/hr | Max Concurrency |
|---|---|---|---|---|
| 262,144 | pipeline | 6 | $1.65 | 1.017 |
| 262,144 | tensor | 4 | $1.75 | 1.142 |
| 262,144 | tensor | 4 | $2.34 | 1.142 |
| 262,144 | tensor | 4 | $2.97 | 1.142 |
| 262,144 | tensor | 4 | $3.68 | 1.142 |
| 262,144 | none | 1 | $4.11 | 1.285 |
| 262,144 | pipeline | 3 | $4.34 | 1.205 |
| 262,144 | tensor | 2 | $4.61 | 2.707 |
| 262,144 | none | 1 | $4.74 | 2.342 |
| 262,144 | tensor | 2 | $4.93 | 2.707 |
| 262,144 | tensor | 4 | $5.74 | 1.862 |
| 262,144 | tensor | 2 | $7.84 | 2.707 |
| 262,144 | tensor | 2 | $9.40 | 5.452 |
| Context | Parallelism | GPUs | Price/hr | Max Concurrency |
|---|---|---|---|---|
| 262,144 | pipeline | 6 | $3.50 | 1.566 |
| 262,144 | tensor | 2 | $4.61 | 2.176 |
| 262,144 | tensor | 8 | $4.61 | 2.521 |
| 262,144 | none | 1 | $4.74 | 1.811 |
| 262,144 | tensor | 2 | $4.93 | 2.176 |
| 262,144 | tensor | 4 | $5.74 | 1.331 |
| 262,144 | pipeline | 6 | $5.83 | 1.566 |
| 262,144 | tensor | 8 | $7.51 | 2.521 |
| 262,144 | tensor | 2 | $7.84 | 2.176 |
| 262,144 | tensor | 2 | $8.17 | 2.806 |
| 262,144 | tensor | 2 | $9.40 | 4.921 |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.