Ministral-3-14B-Instruct-2512

multimodal

Ministral-3-14B-Instruct-2512 is the flagship model of the Ministral 3 family, delivering performance comparable to the much larger Mistral Small 3.2 24B while being significantly smaller. The model consists of two architectural components: a 13.5B-parameter text LLM and a 0.4B-parameter vision encoder, providing native multimodality. It supports a context window of up to 256K (262,144) tokens, enabling processing of long documents and extended conversations. Thanks to edge-computing optimizations, the model can run locally, occupying less than 24 GB of VRAM in int4 format (in FP8, the format in which the developers release the model, the weights take up about 30 GiB). It is distributed under the Apache 2.0 license.

The architectural distinctiveness of Ministral 3 lies in Cascade Distillation: iterative pruning and knowledge distillation from a larger parent model (Mistral Small 3.1) into compact child models. This approach achieves performance competitive with models trained on far larger token volumes (36 trillion for Qwen3, 15 trillion for Llama 3), while Ministral 3 is trained on just 1–3 trillion tokens.
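At 256K context, memory pressure comes largely from the KV cache rather than the weights. The back-of-envelope sketch below uses the published layer count (40); the grouped-query-attention parameters (8 KV heads, head dimension 128) and the 1-byte-per-element FP8 cache are illustrative assumptions, not published specs:

```python
# Rough KV-cache size estimate for long-context serving.
# Layer count (40) is from the model card; kv_heads=8, head_dim=128,
# and a 1-byte FP8 cache element are hypothetical assumptions.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem):
    # 2x accounts for the separate key and value tensors in each layer
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

size = kv_cache_bytes(layers=40, kv_heads=8, head_dim=128,
                      seq_len=262_144, bytes_per_elem=1)
print(f"{size / 2**30:.1f} GiB")  # cache for one full-context sequence
```

Under these assumptions a single full-context sequence alone needs on the order of 20 GiB of cache, which is why long-context serving favors high-VRAM GPUs or multi-GPU parallelism.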

The model demonstrates leading results on key benchmarks. On Arena Hard (0.551), which evaluates instruction following in complex scenarios, Ministral 3 14B surpasses Qwen3 14B (0.427) and Gemma3 12B (0.436). On WildBench (68.5), which tests general conversational ability in open domains, it also ranks first among its peers. On the MATH benchmark (Maj@1) it achieves 0.904, second only to Qwen3-VL-8B-Instruct (0.946).

The developers recommend temperatures below 0.1 for production environments, though higher values are acceptable for creative tasks, and advise limiting the tool list to the minimum necessary. For images, an aspect ratio close to 1:1 is recommended. Use cases include AI assistants, chat systems, and advanced agentic workflows. Overall, the model is an excellent fit for enterprise-grade solutions that require multimodal understanding and high performance under constrained resources.
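The sampling guidance above maps directly onto an OpenAI-compatible chat-completions payload. The model identifier and image URL below are placeholder assumptions for whatever your deployment exposes; only the payload shape is the point:

```python
# Build an OpenAI-compatible chat-completions payload that follows the
# recommendations above: low temperature for production, a minimal tool
# list, and a near-square image. Model name and URL are placeholders.

def build_chat_request(user_text, image_url=None, tools=None, temperature=0.05):
    content = [{"type": "text", "text": user_text}]
    if image_url:
        # an aspect ratio close to 1:1 is recommended for images
        content.append({"type": "image_url", "image_url": {"url": image_url}})
    payload = {
        "model": "Ministral-3-14B-Instruct-2512",
        "messages": [{"role": "user", "content": content}],
        "temperature": temperature,  # keep below 0.1 in production
    }
    if tools:
        payload["tools"] = tools  # keep this list to the minimum necessary
    return payload

req = build_chat_request("Describe this diagram.",
                         image_url="https://example.com/diagram_1024x1024.png")
print(req["temperature"], len(req["messages"][0]["content"]))
```

For creative tasks, pass a higher `temperature`; for agentic use, pass only the tools the current task actually needs.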


Announce Date: 31.10.2025
Parameters: 14B
Context: 256K (262,144 tokens)
Layers: 40
Attention Type: Full Attention
Developer: Mistral AI
Transformers Version: 5.0.0.dev0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Ministral-3-14B-Instruct-2512 capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting Ministral-3-14B-Instruct-2512

Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS / Max Concurrency |
|---|---|---|---|---|---|
| teslaa10-3.16.96.160 | 262,144 | pipeline | 3 | $1.34 | 1.111 |
| teslaa10-4.16.64.160 | 262,144 | tensor | 4 | $1.62 | 1.588 |
| teslaa2-6.32.128.160 | 262,144 | pipeline | 6 | $1.65 | 1.463 |
| rtx3090-3.16.96.160 | 262,144 | pipeline | 3 | $2.29 | 1.111 |
| rtxa5000-4.16.128.160.nvlink | 262,144 | tensor | 4 | $2.34 | 1.588 |
| teslaa100-1.16.64.160 | 262,144 | — | 1 | $2.37 | 1.416 |
| rtx4090-3.16.96.160 | 262,144 | pipeline | 3 | $2.83 | 1.111 |
| rtx3090-4.16.64.160 | 262,144 | tensor | 4 | $2.89 | 1.588 |
| rtx5090-2.16.64.160 | 262,144 | tensor | 2 | $2.93 | 0.993 |
| rtx4090-4.16.64.160 | 262,144 | tensor | 4 | $3.60 | 1.588 |
| h100-1.16.64.160 | 262,144 | — | 1 | $3.83 | 1.416 |
| h100nvl-1.16.96.160 | 262,144 | — | 1 | $4.11 | 1.731 |
| h200-1.16.128.160 | 262,144 | — | 1 | $4.74 | 2.788 |
Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS / Max Concurrency |
|---|---|---|---|---|---|
| teslaa2-6.32.128.160 | 262,144 | pipeline | 6 | $1.65 | 1.053 |
| teslaa10-4.16.128.160 | 262,144 | tensor | 4 | $1.75 | 1.178 |
| rtxa5000-4.16.128.160.nvlink | 262,144 | tensor | 4 | $2.34 | 1.178 |
| teslaa100-1.16.128.160 | 262,144 | — | 1 | $2.50 | 1.005 |
| rtx3090-4.16.96.320 | 262,144 | tensor | 4 | $2.97 | 1.178 |
| rtx4090-4.16.96.320 | 262,144 | tensor | 4 | $3.68 | 1.178 |
| h100-1.16.128.160 | 262,144 | — | 1 | $3.95 | 1.005 |
| h100nvl-1.16.96.160 | 262,144 | — | 1 | $4.11 | 1.320 |
| rtx5090-3.16.96.160 | 262,144 | pipeline | 3 | $4.34 | 1.240 |
| h200-1.16.128.160 | 262,144 | — | 1 | $4.74 | 2.378 |
| rtx5090-4.16.128.160 | 262,144 | tensor | 4 | $5.74 | 1.898 |
Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS / Max Concurrency |
|---|---|---|---|---|---|
| rtxa5000-6.24.192.160.nvlink | 262,144 | pipeline | 6 | $3.50 | 1.566 |
| teslaa100-2.24.96.160.nvlink | 262,144 | tensor | 2 | $4.61 | 2.176 |
| rtxa5000-8.24.256.160.nvlink | 262,144 | tensor | 8 | $4.61 | 2.521 |
| h200-1.16.128.160 | 262,144 | — | 1 | $4.74 | 1.811 |
| rtx5090-4.16.128.160 | 262,144 | tensor | 4 | $5.74 | 1.331 |
| rtx4090-6.44.256.160 | 262,144 | pipeline | 6 | $5.83 | 1.566 |
| rtx4090-8.44.256.160 | 262,144 | tensor | 8 | $7.51 | 2.521 |
| h100-2.24.256.160 | 262,144 | tensor | 2 | $7.84 | 2.176 |
| h100nvl-2.24.192.240 | 262,144 | tensor | 2 | $8.17 | 2.806 |
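To compare configurations, it can help to normalize the hourly price by the throughput column. The sketch below uses a few rows from the first pricing table (prices as listed above; 730 hours approximates one month):

```python
# Compare a few configurations from the first pricing table by
# approximate monthly cost and price per unit of the throughput column.

HOURS_PER_MONTH = 730  # ~24 * 365 / 12

# name: (price per hour in USD, throughput column value)
configs = {
    "teslaa10-3.16.96.160": (1.34, 1.111),
    "h100nvl-1.16.96.160": (4.11, 1.731),
    "h200-1.16.128.160": (4.74, 2.788),
}

for name, (price, tput) in configs.items():
    monthly = price * HOURS_PER_MONTH
    print(f"{name}: ${monthly:.0f}/month, ${price / tput:.2f} per throughput unit")
```

By this measure the cheapest server is not always the best value: a higher hourly price can still win on cost per unit of throughput.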

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.