Ministral-3-8B-Reasoning-2512 is built on the same architecture as the larger model: the language model has 8.4 billion parameters and the vision encoder 0.4 billion. It was obtained via cascade distillation from Mistral Small 3.1 (24B) through an intermediate stage: the parent model is first pruned to 14B, then further pruned to 8B, with knowledge distillation at each step. This two-stage process preserves effective knowledge transfer while significantly reducing training compute. The model is a full-fledged reasoning version, post-trained for tasks that require complex reasoning: mathematics, programming, and the natural sciences. It supports dozens of languages, strictly follows system prompts, and offers agentic capabilities with native function calling and JSON output. The 256k-token context window allows processing large volumes of information in a single session.
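The advertised function calling can be exercised through any OpenAI-compatible endpoint. The sketch below builds such a request body; the `get_weather` tool, its parameters, and the message contents are illustrative assumptions, not part of the model card:

```python
import json

# Hypothetical tool schema -- the function name and parameters are
# illustrative only; define your own tools for real deployments.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def build_request(user_message: str) -> str:
    """Build an OpenAI-style chat request body that enables tool use."""
    body = {
        "model": "mistralai/Ministral-3-8B-Reasoning-2512",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [WEATHER_TOOL],
        "tool_choice": "auto",   # let the model decide when to call the tool
        "temperature": 0.7,      # recommended sampling temperature
    }
    return json.dumps(body)

payload = build_request("What is the weather in Paris?")
```

The resulting JSON string can be POSTed to a `/v1/chat/completions` endpoint of whatever serving stack hosts the model.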
On the LiveCodeBench benchmark (assessing the ability to generate and understand code), Ministral-3-8B scores 0.616, outperforming Qwen3-VL-8B-Thinking with 0.580. On the AIME25 and AIME24 math tests, the model achieves 0.787 and 0.860 respectively, comparable to Qwen3-VL-8B-Thinking (0.798 and 0.860). On GPQA Diamond, the result is 0.668, slightly below the aforementioned competitor's 0.671.
For visual input, the developers recommend maintaining an aspect ratio close to 1:1 and cropping images as needed for optimal performance. For maximum reasoning efficiency, use the system prompt provided at https://huggingface.co/mistralai/Ministral-3-8B-Reasoning-2512/blob/main/SYSTEM_PROMPT.txt, supplementing it with custom instructions that clearly define the environment and use case, including guidelines for effective tool use in agentic systems. In multi-step interactions, reasoning traces must be preserved in the dialogue context. The recommended sampling temperature is 0.7 for most environments. As with other models in the family, the set of available tools should be clearly defined and limited to the minimum necessary.
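The requirement to preserve reasoning traces across turns can be sketched as a small history helper. The `reasoning_content` key is an assumption: different serving stacks return the trace under different keys or inline it in `content`, so adapt the field name to your server's response format:

```python
def append_turn(history, user_msg, answer, reasoning):
    """Append one exchange to the dialogue, keeping the reasoning trace.

    Storing the trace alongside the final answer (rather than stripping
    it) lets the model reuse its earlier reasoning on the next turn.
    """
    history.append({"role": "user", "content": user_msg})
    history.append({
        "role": "assistant",
        "content": answer,
        "reasoning_content": reasoning,  # kept in context, not discarded
    })
    return history

# Illustrative two-turn dialogue; the contents are made up.
history = []
append_turn(history, "Factor 91.", "91 = 7 * 13", "91 is odd; try 7: 7 * 13 = 91.")
append_turn(history, "Now factor 93.", "93 = 3 * 31", "Digit sum is 12, divisible by 3.")
```

On each new request, send the full `history` list as the `messages` array so prior traces stay in the 256k context window.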
Ministral-3-8B is ideally suited for local systems, combining versatility with efficiency. Key use cases include: chat interfaces in resource-constrained environments, local AI assistants for everyday use, image/document description and understanding, translation and content generation, specialized agentic applications, as well as fine-tuning for specific tasks.
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
| Context (tokens) | Parallelism | GPUs | Price per hour | TPS |
|---|---|---|---|---|
| 262,144 | tensor | 4 | $0.96 | 1.199 |
| 262,144 | tensor | 4 | $1.26 | 1.199 |
| 262,144 | pipeline | 3 | $1.34 | 1.484 |
| 262,144 | tensor | 4 | $1.57 | 2.046 |
| 262,144 | pipeline | 3 | $2.29 | 1.484 |
| 262,144 | tensor | 4 | $2.34 | 2.046 |
| 262,144 | | 1 | $2.37 | 1.843 |
| 262,144 | pipeline | 3 | $2.83 | 1.484 |
| 262,144 | tensor | 4 | $2.89 | 2.046 |
| 262,144 | tensor | 2 | $2.93 | 1.346 |
| 262,144 | tensor | 4 | $3.60 | 2.046 |
| 262,144 | | 1 | $3.83 | 1.843 |
| 262,144 | | 1 | $4.11 | 2.214 |
| 262,144 | tensor | 2 | $4.61 | 3.887 |
| 262,144 | | 1 | $4.74 | 3.458 |
| 262,144 | tensor | 2 | $9.40 | 7.117 |
| Context (tokens) | Parallelism | GPUs | Price per hour | TPS |
|---|---|---|---|---|
| 262,144 | pipeline | 3 | $1.34 | 1.089 |
| 262,144 | tensor | 4 | $1.62 | 1.651 |
| 262,144 | pipeline | 6 | $1.65 | 1.504 |
| 262,144 | pipeline | 3 | $2.29 | 1.089 |
| 262,144 | tensor | 4 | $2.34 | 1.651 |
| 262,144 | | 1 | $2.37 | 1.448 |
| 262,144 | pipeline | 3 | $2.83 | 1.089 |
| 262,144 | tensor | 4 | $2.89 | 1.651 |
| 262,144 | tensor | 4 | $3.60 | 1.651 |
| 262,144 | | 1 | $3.83 | 1.448 |
| 262,144 | | 1 | $4.11 | 1.818 |
| 262,144 | pipeline | 3 | $4.34 | 1.724 |
| 262,144 | tensor | 2 | $4.61 | 3.492 |
| 262,144 | | 1 | $4.74 | 3.063 |
| 262,144 | tensor | 4 | $5.74 | 2.498 |
| 262,144 | tensor | 2 | $9.40 | 6.721 |
| Context (tokens) | Parallelism | GPUs | Price per hour | TPS |
|---|---|---|---|---|
| 262,144 | pipeline | 6 | $1.65 | 1.123 |
| 262,144 | tensor | 4 | $1.75 | 1.270 |
| 262,144 | tensor | 4 | $2.34 | 1.270 |
| 262,144 | | 1 | $2.50 | 1.067 |
| 262,144 | tensor | 4 | $2.97 | 1.270 |
| 262,144 | tensor | 4 | $3.68 | 1.270 |
| 262,144 | | 1 | $3.95 | 1.067 |
| 262,144 | | 1 | $4.11 | 1.438 |
| 262,144 | pipeline | 3 | $4.34 | 1.343 |
| 262,144 | tensor | 2 | $4.61 | 3.111 |
| 262,144 | | 1 | $4.74 | 2.682 |
| 262,144 | tensor | 4 | $5.74 | 2.117 |
| 262,144 | tensor | 2 | $9.40 | 6.341 |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.