Ministral-3-8B-Reasoning-2512

reasoning
multimodal

Ministral-3-8B-Reasoning-2512 shares its architecture with the larger models in the family: the language model has 8.4 billion parameters and the vision encoder 0.4 billion. The model was obtained by cascade distillation from Mistral Small 3.1 (24B) through an intermediate stage: the parent model is first pruned to 14B, then further pruned to 8B, with knowledge distillation applied in parallel at each step. This two-stage process preserves effective knowledge transfer while significantly reducing training compute. The model is a full-fledged reasoning variant, post-trained for tasks that require complex reasoning: mathematics, programming, and the natural sciences. It supports dozens of languages, strictly follows system prompts, and offers agentic capabilities with native support for function calling and JSON output. The 256k-token context window allows large volumes of information to be processed in a single session.
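The model can be served locally by any engine that understands the Mistral checkpoint format; a minimal sketch using vLLM (the flag names are assumptions based on other Mistral releases and may differ across vLLM versions):

```shell
# Sketch: serving Ministral-3-8B-Reasoning-2512 with vLLM.
# Assumes a recent vLLM build with Mistral-format support.
vllm serve mistralai/Ministral-3-8B-Reasoning-2512 \
  --tokenizer-mode mistral \
  --config-format mistral \
  --load-format mistral \
  --max-model-len 262144  # full 256k context; reduce if GPU memory is tight
```

This exposes an OpenAI-compatible endpoint on the default port, which the chat examples below the usage recommendations target.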

On the LiveCodeBench benchmark (code generation and understanding), Ministral-3-8B scores 0.616, ahead of Qwen3-VL-8B-Thinking at 0.580. On the AIME25 and AIME24 math tests it achieves 0.787 and 0.860 respectively, comparable to Qwen3-VL-8B-Thinking (0.798 and 0.860). On GPQA Diamond it scores 0.668, slightly below that competitor's 0.671.

The developers recommend maintaining an aspect ratio close to 1:1 for visual input and cropping images as needed for optimal performance. For maximum reasoning efficiency, use the system prompt provided at https://huggingface.co/mistralai/Ministral-3-8B-Reasoning-2512/blob/main/SYSTEM_PROMPT.txt , supplementing it with custom instructions that clearly define the environment and use case, including guidelines for effective tool use in agentic systems. In multi-step interactions, reasoning traces must be preserved in the dialogue context. The recommended sampling temperature is 0.7 for most environments. As with other models in the family, the set of available tools should be clearly defined and limited to the minimum necessary.
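These recommendations can be sketched as request-building code for an OpenAI-compatible endpoint. Everything here (the helper name, the placeholder prompt text, the reasoning-trace format) is illustrative, not part of an official client; the key points are the 0.7 temperature and that assistant replies, including their reasoning traces, go back into the message history.

```python
# Placeholder: in practice this would be the contents of SYSTEM_PROMPT.txt
# from the model repository, plus your own environment-specific instructions.
SYSTEM_PROMPT = "<contents of SYSTEM_PROMPT.txt>\nYou run inside an internal docs assistant."

def build_request(history, user_msg, temperature=0.7):
    """Build a chat-completions payload, keeping prior turns (and their
    reasoning traces) in the dialogue context."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history  # earlier turns, reasoning traces included
    messages.append({"role": "user", "content": user_msg})
    return {
        "model": "mistralai/Ministral-3-8B-Reasoning-2512",
        "messages": messages,
        "temperature": temperature,  # recommended default
    }

# Multi-step interaction: the assistant's full reply, with its reasoning
# trace, is appended to history before the next request is built.
history = []
req1 = build_request(history, "What is 17 * 24?")
# ... send req1, receive reply ...
history += [
    {"role": "user", "content": "What is 17 * 24?"},
    {"role": "assistant", "content": "[reasoning trace] 17 * 24 = 408"},
]
req2 = build_request(history, "Now divide that by 8.")
```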

Ministral-3-8B is ideally suited for local systems, combining versatility with efficiency. Key use cases include: chat interfaces in resource-constrained environments, local AI assistants for everyday use, image/document description and understanding, translation and content generation, specialized agentic applications, as well as fine-tuning for specific tasks.


Announce Date: 31.10.2025
Parameters: 9B
Context: 263K
Layers: 34
Attention Type: Full Attention
Developer: Mistral AI
Transformers Version: 5.0.0.dev0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Ministral-3-8B-Reasoning-2512 capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name | Context | Type | GPU | Status | Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security by processing data in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting Ministral-3-8B-Reasoning-2512

Prices:
Name | Context, tokens | Parallelism | GPUs | Price, hour | TPS
teslat4-4.16.64.160 | 262,144 | tensor | 4 | $0.96 | 1.199
teslaa2-4.32.128.160 | 262,144 | tensor | 4 | $1.26 | 1.199
teslaa10-3.16.96.160 | 262,144 | pipeline | 3 | $1.34 | 1.484
teslaa10-4.12.48.160 | 262,144 | tensor | 4 | $1.57 | 2.046
rtx3090-3.16.96.160 | 262,144 | pipeline | 3 | $2.29 | 1.484
rtxa5000-4.16.128.160.nvlink | 262,144 | tensor | 4 | $2.34 | 2.046
teslaa100-1.16.64.160 | 262,144 | — | 1 | $2.37 | 1.843
rtx4090-3.16.96.160 | 262,144 | pipeline | 3 | $2.83 | 1.484
rtx3090-4.16.64.160 | 262,144 | tensor | 4 | $2.89 | 2.046
rtx5090-2.16.64.160 | 262,144 | tensor | 2 | $2.93 | 1.346
rtx4090-4.16.64.160 | 262,144 | tensor | 4 | $3.60 | 2.046
h100-1.16.64.160 | 262,144 | — | 1 | $3.83 | 1.843
h100nvl-1.16.96.160 | 262,144 | — | 1 | $4.11 | 2.214
teslaa100-2.24.96.160.nvlink | 262,144 | tensor | 2 | $4.61 | 3.887
h200-1.16.128.160 | 262,144 | — | 1 | $4.74 | 3.458
h200-2.24.256.160.nvlink | 262,144 | tensor | 2 | $9.40 | 7.117
Prices:
Name | Context, tokens | Parallelism | GPUs | Price, hour | TPS
teslaa10-3.16.96.160 | 262,144 | pipeline | 3 | $1.34 | 1.089
teslaa10-4.16.64.160 | 262,144 | tensor | 4 | $1.62 | 1.651
teslaa2-6.32.128.160 | 262,144 | pipeline | 6 | $1.65 | 1.504
rtx3090-3.16.96.160 | 262,144 | pipeline | 3 | $2.29 | 1.089
rtxa5000-4.16.128.160.nvlink | 262,144 | tensor | 4 | $2.34 | 1.651
teslaa100-1.16.64.160 | 262,144 | — | 1 | $2.37 | 1.448
rtx4090-3.16.96.160 | 262,144 | pipeline | 3 | $2.83 | 1.089
rtx3090-4.16.64.160 | 262,144 | tensor | 4 | $2.89 | 1.651
rtx4090-4.16.64.160 | 262,144 | tensor | 4 | $3.60 | 1.651
h100-1.16.64.160 | 262,144 | — | 1 | $3.83 | 1.448
h100nvl-1.16.96.160 | 262,144 | — | 1 | $4.11 | 1.818
rtx5090-3.16.96.160 | 262,144 | pipeline | 3 | $4.34 | 1.724
teslaa100-2.24.96.160.nvlink | 262,144 | tensor | 2 | $4.61 | 3.492
h200-1.16.128.160 | 262,144 | — | 1 | $4.74 | 3.063
rtx5090-4.16.128.160 | 262,144 | tensor | 4 | $5.74 | 2.498
h200-2.24.256.160.nvlink | 262,144 | tensor | 2 | $9.40 | 6.721
Prices:
Name | Context, tokens | Parallelism | GPUs | Price, hour | TPS
teslaa2-6.32.128.160 | 262,144 | pipeline | 6 | $1.65 | 1.123
teslaa10-4.16.128.160 | 262,144 | tensor | 4 | $1.75 | 1.270
rtxa5000-4.16.128.160.nvlink | 262,144 | tensor | 4 | $2.34 | 1.270
teslaa100-1.16.128.160 | 262,144 | — | 1 | $2.50 | 1.067
rtx3090-4.16.96.320 | 262,144 | tensor | 4 | $2.97 | 1.270
rtx4090-4.16.96.320 | 262,144 | tensor | 4 | $3.68 | 1.270
h100-1.16.128.160 | 262,144 | — | 1 | $3.95 | 1.067
h100nvl-1.16.96.160 | 262,144 | — | 1 | $4.11 | 1.438
rtx5090-3.16.96.160 | 262,144 | pipeline | 3 | $4.34 | 1.343
teslaa100-2.24.96.160.nvlink | 262,144 | tensor | 2 | $4.61 | 3.111
h200-1.16.128.160 | 262,144 | — | 1 | $4.74 | 2.682
rtx5090-4.16.128.160 | 262,144 | tensor | 4 | $5.74 | 2.117
h200-2.24.256.160.nvlink | 262,144 | tensor | 2 | $9.40 | 6.341
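As a rough illustration of long-term rental, an hourly price from the tables above can be converted into a monthly estimate. This is straight arithmetic under an assumed 30-day month; the platform's actual monthly billing may use its own formula.

```python
# Assumption: a 30-day billing month.
HOURS_PER_MONTH = 24 * 30  # 720 hours

def monthly_cost(price_per_hour: float) -> float:
    """Estimated monthly cost for an instance billed hourly."""
    return round(price_per_hour * HOURS_PER_MONTH, 2)

# e.g. the cheapest configuration above, teslat4-4.16.64.160 at $0.96/hour:
print(monthly_cost(0.96))  # → 691.2
```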


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.