NVIDIA-Nemotron-3-Super-120B-A12B

reasoning

The NVIDIA Nemotron 3 Super 120B-A12B is a flagship model in NVIDIA's family of open LLMs, designed for tasks that require deep reasoning, complex tool interaction, and the processing of large volumes of data. The model uses an innovative hybrid architecture that combines sparse Mixture-of-Experts (MoE) layers, Mamba-2 state-space blocks, and a small number of traditional attention layers. This approach scales the total parameter count to 120 billion while keeping inference costs low by activating only 12 billion parameters for each token.

A key architectural innovation is the use of Latent MoE. The model consists of 88 layers organized in a periodic, alternating structure: the majority are Mamba-2 blocks, whose complexity is linear in sequence length. Strategically placed global attention layers (Grouped-Query Attention, 32 query / 2 KV heads) act as "anchors," preserving important dependencies and enabling information exchange between distant parts of the context. A distinctive feature is the set of MoE layers that carry no attention mechanism: in them, routing and expert computation happen entirely in a compressed latent space (hidden states are projected from 4096 down to 1024 dimensions), so that 22 of the 512 experts can be activated per token with minimal overhead. All resource-intensive operations, including routing, expert computation, and all-to-all communication, stay in this compressed space, which further reduces memory consumption and accelerates inference without sacrificing quality. This hybrid design balances speed, memory, and accuracy, optimizes long-context performance, and makes the model well suited to Retrieval-Augmented Generation (RAG) and large-scale document analysis.
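
The latent routing idea can be sketched in a few lines of PyTorch. The dimensions (4096 → 1024), the expert count (512), and top-22 routing are taken from the description above; the class name, expert FFN width, and naive per-token dispatch loop are purely illustrative and do not reflect NVIDIA's actual implementation:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LatentMoE(nn.Module):
        """Illustrative latent MoE block: routing and expert FFNs run in a
        compressed latent space rather than the full model dimension."""
        def __init__(self, d_model=4096, d_latent=1024, n_experts=512, top_k=22, d_ff=2048):
            super().__init__()
            self.down = nn.Linear(d_model, d_latent, bias=False)   # 4096 -> 1024 compression
            self.up = nn.Linear(d_latent, d_model, bias=False)     # 1024 -> 4096 back-projection
            self.router = nn.Linear(d_latent, n_experts, bias=False)
            # Each expert is a small FFN that operates entirely in the latent space.
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(d_latent, d_ff), nn.SiLU(), nn.Linear(d_ff, d_latent))
                for _ in range(n_experts)
            )
            self.top_k = top_k

        def forward(self, x):                       # x: (tokens, d_model)
            z = self.down(x)                        # compress to the latent space
            scores = F.softmax(self.router(z), dim=-1)   # routing also happens in latent space
            weights, idx = torch.topk(scores, self.top_k, dim=-1)
            weights = weights / weights.sum(dim=-1, keepdim=True)
            out = torch.zeros_like(z)
            for t in range(x.size(0)):              # naive per-token dispatch, for clarity only
                for w, e in zip(weights[t], idx[t]):
                    out[t] += w * self.experts[int(e)](z[t])
            return self.up(out)                     # expand back to the model dimension

    # Scaled-down smoke test (full-size dimensions would need tens of GB of memory)
    moe = LatentMoE(d_model=256, d_latent=64, n_experts=16, top_k=4, d_ff=128)
    print(moe(torch.randn(4, 256)).shape)           # torch.Size([4, 256])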

The uniqueness of Nemotron 3 Super is underscored by techniques rarely seen in open models. First, it is one of the first models to complete a full pre-training run on 25 trillion tokens with a large portion of the training performed in the 4-bit floating-point format NVFP4. Second, it uses Multi-Token Prediction (MTP): the model is trained to predict several future tokens at once, which not only improves training quality but also provides a built-in speculative-decoding mechanism that accelerates response generation. Third, the post-training (RLHF) stage relied on a specially trained generative reward model, Qwen3-Nemotron-235B-A22B-GenRM-2603, built on Qwen3-235B-A22B-Thinking-2507 and trained specifically to evaluate response quality. This enabled fine-tuning of the model's behavior, improving its helpfulness and instruction-following capabilities.
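
The MTP objective can be illustrated with a toy sketch: besides the ordinary next-token head, additional heads are trained to predict tokens further ahead, and at inference their predictions can serve as draft tokens for speculative decoding. The head structure and equal loss weighting below are illustrative assumptions, not the model's actual training recipe:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MTPHeads(nn.Module):
        """Toy multi-token prediction: head k predicts the token k steps ahead."""
        def __init__(self, d_model, vocab_size, n_future=3):
            super().__init__()
            self.heads = nn.ModuleList(nn.Linear(d_model, vocab_size) for _ in range(n_future))

        def loss(self, hidden, targets):
            # hidden:  (batch, seq, d_model) - final hidden states from the trunk
            # targets: (batch, seq)          - token ids of the same sequence
            total = 0.0
            for k, head in enumerate(self.heads, start=1):
                logits = head(hidden[:, :-k])              # predict the token at position t + k
                total = total + F.cross_entropy(
                    logits.reshape(-1, logits.size(-1)),
                    targets[:, k:].reshape(-1),
                )
            return total / len(self.heads)

    # Example with random activations: batch=2, seq=16, d_model=32, vocab=100
    mtp = MTPHeads(d_model=32, vocab_size=100)
    print(float(mtp.loss(torch.randn(2, 16, 32), torch.randint(0, 100, (2, 16)))))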

On benchmarks, Nemotron-3-Super demonstrates outstanding performance. It takes top positions on mathematical reasoning benchmarks (AIME25, HMMT), where it outperforms much larger models, and on programming (LiveCodeBench) and agentic (SWE-Bench) tasks it clearly surpasses counterparts such as GPT-OSS-120B, confirming its strength on practical workloads. A key advantage is inference speed: up to 2.2x faster than comparable models while maintaining competitive quality. Its long-context performance is particularly notable: on the RULER test at 1 million tokens it scores 91.75%, substantially ahead of competitors.

Thanks to its efficient architecture and support for a 1-million-token context, the model is an excellent choice for building autonomous AI agents, automating IT ticketing, writing and reviewing code, and constructing complex RAG systems that operate over massive amounts of unstructured information.


Announce Date: 10.03.2026
Parameters: 124B
Experts: 512
Activated at inference: 12B
Context: 263K
Layers: 88, using full attention: 8, using no attention: 40
Attention Type: Hybrid Attention
Mamba Type: Mamba 2
Developer: NVIDIA
Transformers Version: 4.57.6
vLLM Version: 0.17.1
License: NVIDIA Nemotron Open Model License
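
Given the versions listed above, the model can be served locally with vLLM. Below is a minimal offline-inference sketch; the Hugging Face model ID, tensor-parallel degree, and context setting are assumptions that should be adjusted to the actual model card and your hardware:

    from vllm import LLM, SamplingParams

    llm = LLM(
        model="nvidia/NVIDIA-Nemotron-3-Super-120B-A12B",  # assumed repository name
        tensor_parallel_size=4,                            # match your GPU count
        trust_remote_code=True,
        max_model_len=262144,                              # context used in the configurations below
    )

    params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=1024)
    outputs = llm.generate(["Explain the difference between Mamba-2 and attention layers."], params)
    print(outputs[0].outputs[0].text)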

Public endpoint

Use our pre-built public endpoints for free to test inference and explore NVIDIA-Nemotron-3-Super-120B-A12B capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name Context Type GPU Status Link
There are no public endpoints for this model yet.
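
Once a public or private endpoint is running, it can be queried with any OpenAI-compatible client. The base URL and model name below are placeholders; substitute the values shown on the endpoint page and the API token obtained after registration:

    from openai import OpenAI

    client = OpenAI(
        base_url="https://<endpoint-host>/v1",  # placeholder: copy the URL from the endpoint page
        api_key="<your-API-token>",             # token from the token management page
    )

    response = client.chat.completions.create(
        model="NVIDIA-Nemotron-3-Super-120B-A12B",  # placeholder model name
        messages=[{"role": "user", "content": "Summarize the key ideas of Latent MoE."}],
        max_tokens=512,
    )
    print(response.choices[0].message.content)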

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting NVIDIA-Nemotron-3-Super-120B-A12B

Prices:
Name | Context | Parallelism | GPUs | Price, hour | TPS
teslaa2-6.32.128.160 | 262,144 | pipeline | 6 | $1.65 | 0.996
teslaa10-4.16.128.160 | 262,144 | tensor | 4 | $1.75 | 3.312
rtxa5000-4.16.128.160.nvlink | 262,144 | tensor | 4 | $2.34 | 3.312
rtx3090-4.16.96.320 | 262,144 | tensor | 4 | $2.97 | 3.312
rtx4090-4.16.96.320 | 262,144 | tensor | 4 | $3.68 | 3.312
teslav100-3.64.256.320 | 262,144 | pipeline | 3 | $3.89 | 4.470
h100nvl-1.16.96.160 | 262,144 | — | 1 | $4.11 | 5.952
rtx5090-3.16.96.160 | 262,144 | pipeline | 3 | $4.34 | 4.470
teslav100-4.32.96.160 | 262,144 | tensor | 4 | $4.35 | 16.653
teslaa100-2.24.96.160.nvlink | 262,144 | tensor | 2 | $4.61 | 32.311
h200-1.16.128.160 | 262,144 | — | 1 | $4.74 | 25.548
rtx5090-4.16.128.160 | 262,144 | tensor | 4 | $5.74 | 16.653
h100-2.24.256.160 | 262,144 | tensor | 2 | $7.84 | 32.311

Prices:
Name | Context | Parallelism | GPUs | Price, hour | TPS
h200-1.16.128.240 | 262,144 | — | 1 | $4.74 | 2.254
teslaa100-2.24.256.240 | 262,144 | tensor | 2 | $4.93 | 9.017
rtx4090-8.44.256.240 | 262,144 | tensor | 8 | $7.52 | 15.410
h100-2.24.256.240 | 262,144 | tensor | 2 | $7.85 | 9.017
h100nvl-2.24.192.240 | 262,144 | tensor | 2 | $8.17 | 20.691
rtx5090-6.44.256.240 | 262,144 | pipeline | 6 | $8.86 | 17.726
rtx5090-8.44.256.240 | 262,144 | tensor | 8 | $11.55 | 42.093

Prices:
Name | Context | Parallelism | GPUs | Price, hour | TPS
teslaa100-4.16.256.480 | 262,144 | tensor | 4 | $9.17 | 22.120
h200-2.24.256.320 | 262,144 | tensor | 2 | $9.42 | 8.594
h100nvl-3.24.384.480 | 262,144 | pipeline | 3 | $12.38 | 7.436
h100-4.16.256.480 | 262,144 | tensor | 4 | $14.99 | 22.120
h100nvl-4.32.384.480 | 262,144 | tensor | 4 | $16.23 | 45.468


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.