Ministral-3-14B-Reasoning-2512

reasoning
multimodal

Ministral-3-14B-Reasoning-2512 is the flagship model of the Ministral 3 lineup: a reasoning variant whose post-training is specifically optimized for complex tasks that require multi-step reasoning. The model has a modular architecture with two main components, a 13.5B-parameter language model and a 0.4B-parameter vision encoder, allowing it to analyze images efficiently and produce outputs grounded in visual content. A key technical feature is Cascade Distillation, an iterative distillation-and-pruning method that derives the model from its parent, Mistral Small 3.1 (24B), cutting its size by more than 40% while preserving high quality. The model supports dozens of languages, adheres closely to system prompts, and offers strong agentic capabilities, including built-in function calling and JSON output. Its 256k-token context window accommodates large documents and long-running conversations.
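Function calling on OpenAI-compatible inference servers is typically expressed as a `tools` array in the chat-completions request body. A minimal sketch of such a payload is shown below; the `get_weather` tool is an illustrative assumption, not something shipped with the model, and the exact schema accepted by a given server may differ:

```python
import json

# Hypothetical tool definition in the common OpenAI-style function-calling schema.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative example tool
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Request body: keep the tool set minimal, per the usage recommendations.
payload = {
    "model": "mistralai/Ministral-3-14B-Reasoning-2512",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [get_weather_tool],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

body = json.dumps(payload)  # serialized request body, ready to POST
```

The model's reply would then contain either a normal assistant message or a structured tool call with JSON arguments for `get_weather`.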

In terms of benchmarks, Ministral-3-14B-Reasoning posts excellent results. On the AIME 2024 and AIME 2025 math tests it reaches 89.8% and 85.0%, respectively, confirming its ability to solve Olympiad-level problems. On scientific reasoning (GPQA Diamond) it scores 71.2%, and on programming tasks (LiveCodeBench) 64.6%. According to the technical report, at the time of release the model outperformed all known alternatives of comparable size.

When working with images, the developers recommend keeping the aspect ratio (width/height) close to 1:1 and avoiding overly narrow or wide frames; if necessary, images should be cropped for optimal performance. In multi-turn dialogues it is crucial to keep the model's reasoning traces in context. For most tasks, set a system prompt that defines the reasoning procedure (example from the developers: https://huggingface.co/mistralai/Ministral-3-14B-Reasoning-2512/blob/main/SYSTEM_PROMPT.txt) and a sampling temperature of 1, though experimentation is acceptable. When using tools, limit the set to the minimum necessary rather than overloading the model with functions. The model is particularly effective in mathematics, programming, and other tasks that require deep step-by-step reasoning combined with image analysis.
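The aspect-ratio recommendation can be enforced with a simple center crop of the longer side. A minimal sketch in pure arithmetic follows; the `max_ratio` threshold of 1.5 is an assumed illustrative value, not a limit published by the developers:

```python
def center_crop_box(width: int, height: int, max_ratio: float = 1.5):
    """Return a (left, top, right, bottom) crop box that brings an image's
    aspect ratio under max_ratio by cropping the longer side around the
    center. Returns the full frame if the image is already within bounds."""
    if max(width, height) / min(width, height) <= max_ratio:
        return (0, 0, width, height)
    if width > height:  # too wide: shrink the width
        new_w = int(height * max_ratio)
        left = (width - new_w) // 2
        return (left, 0, left + new_w, height)
    # too tall: shrink the height
    new_h = int(width * max_ratio)
    top = (height - new_h) // 2
    return (0, top, width, top + new_h)

# A very wide 4000x1000 frame is cropped to 1500x1000 around the center.
print(center_crop_box(4000, 1000))  # → (1250, 0, 2750, 1000)
```

The returned box can be passed to any imaging library's crop routine (for example, `Image.crop` in Pillow) before sending the image to the model.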


Announce Date: 31.10.2025
Parameters: 14B
Context: 256K (262,144 tokens)
Layers: 40
Attention Type: Full Attention
Developer: Mistral AI
Transformers Version: 5.0.0.dev0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore the capabilities of Ministral-3-14B-Reasoning-2512. An API access token can be obtained on the token management page after registration and verification.
There are no public endpoints for this model yet.
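Once an endpoint is available, requests typically follow the OpenAI-compatible chat-completions format. A minimal sketch of a request body applying the developers' recommendations (system prompt plus temperature 1) is shown below; the system-prompt text here is a placeholder, not the actual prompt published in SYSTEM_PROMPT.txt on Hugging Face:

```python
import json

# Placeholder system prompt; the developers publish their recommended
# reasoning prompt in SYSTEM_PROMPT.txt in the model repository.
SYSTEM_PROMPT = "First think through the problem step by step, then state the final answer."

payload = {
    "model": "mistralai/Ministral-3-14B-Reasoning-2512",
    "temperature": 1,  # recommended sampling temperature for the reasoning variant
    "messages": [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Prove that the sum of two even numbers is even."},
    ],
}

body = json.dumps(payload)  # serialized body for a POST with a bearer API token
```

In multi-turn use, append each assistant reply (including its reasoning trace) back into `messages` so the trace stays in context, as recommended above.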

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting Ministral-3-14B-Reasoning-2512

Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS |
|---|---|---|---|---|---|
| teslaa10-3.16.96.160 | 262,144 | pipeline | 3 | $1.34 | 1.203 |
| teslaa10-4.16.64.160 | 262,144 | tensor | 4 | $1.62 | 1.681 |
| teslaa2-6.32.128.160 | 262,144 | pipeline | 6 | $1.65 | 1.556 |
| rtx3090-3.16.96.160 | 262,144 | pipeline | 3 | $2.29 | 1.203 |
| rtxa5000-4.16.128.160.nvlink | 262,144 | tensor | 4 | $2.34 | 1.681 |
| teslaa100-1.16.64.160 | 262,144 | - | 1 | $2.37 | 1.508 |
| rtx4090-3.16.96.160 | 262,144 | pipeline | 3 | $2.83 | 1.203 |
| rtx3090-4.16.64.160 | 262,144 | tensor | 4 | $2.89 | 1.681 |
| rtx5090-2.16.64.160 | 262,144 | tensor | 2 | $2.93 | 1.086 |
| rtx4090-4.16.64.160 | 262,144 | tensor | 4 | $3.60 | 1.681 |
| h100-1.16.64.160 | 262,144 | - | 1 | $3.83 | 1.508 |
| h100nvl-1.16.96.160 | 262,144 | - | 1 | $4.11 | 1.823 |
| teslaa100-2.24.96.160.nvlink | 262,144 | tensor | 2 | $4.61 | 3.246 |
| h200-1.16.128.160 | 262,144 | - | 1 | $4.74 | 2.881 |
| h200-2.24.256.160.nvlink | 262,144 | tensor | 2 | $9.40 | 5.991 |
Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS |
|---|---|---|---|---|---|
| teslaa2-6.32.128.160 | 262,144 | pipeline | 6 | $1.65 | 1.017 |
| teslaa10-4.16.128.160 | 262,144 | tensor | 4 | $1.75 | 1.142 |
| rtxa5000-4.16.128.160.nvlink | 262,144 | tensor | 4 | $2.34 | 1.142 |
| rtx3090-4.16.96.320 | 262,144 | tensor | 4 | $2.97 | 1.142 |
| rtx4090-4.16.96.320 | 262,144 | tensor | 4 | $3.68 | 1.142 |
| h100nvl-1.16.96.160 | 262,144 | - | 1 | $4.11 | 1.285 |
| rtx5090-3.16.96.160 | 262,144 | pipeline | 3 | $4.34 | 1.205 |
| teslaa100-2.24.96.160.nvlink | 262,144 | tensor | 2 | $4.61 | 2.707 |
| h200-1.16.128.160 | 262,144 | - | 1 | $4.74 | 2.342 |
| teslaa100-2.24.256.160 | 262,144 | tensor | 2 | $4.93 | 2.707 |
| rtx5090-4.16.128.160 | 262,144 | tensor | 4 | $5.74 | 1.862 |
| h100-2.24.256.160 | 262,144 | tensor | 2 | $7.84 | 2.707 |
| h200-2.24.256.160.nvlink | 262,144 | tensor | 2 | $9.40 | 5.452 |
Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS |
|---|---|---|---|---|---|
| rtxa5000-6.24.192.160.nvlink | 262,144 | pipeline | 6 | $3.50 | 1.566 |
| teslaa100-2.24.96.160.nvlink | 262,144 | tensor | 2 | $4.61 | 2.176 |
| rtxa5000-8.24.256.160.nvlink | 262,144 | tensor | 8 | $4.61 | 2.521 |
| h200-1.16.128.160 | 262,144 | - | 1 | $4.74 | 1.811 |
| teslaa100-2.24.256.160 | 262,144 | tensor | 2 | $4.93 | 2.176 |
| rtx5090-4.16.128.160 | 262,144 | tensor | 4 | $5.74 | 1.331 |
| rtx4090-6.44.256.160 | 262,144 | pipeline | 6 | $5.83 | 1.566 |
| rtx4090-8.44.256.160 | 262,144 | tensor | 8 | $7.51 | 2.521 |
| h100-2.24.256.160 | 262,144 | tensor | 2 | $7.84 | 2.176 |
| h100nvl-2.24.192.240 | 262,144 | tensor | 2 | $8.17 | 2.806 |
| h200-2.24.256.160.nvlink | 262,144 | tensor | 2 | $9.40 | 4.921 |

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.