The NVIDIA Nemotron 3 Super 120B-A12B represents a flagship model in NVIDIA's family of open LLMs, designed to tackle tasks requiring deep reasoning, complex tool interaction, and processing large volumes of data. The model utilizes an innovative hybrid architecture, combining sparse Mixture-of-Experts (MoE) layers, Mamba-2 state-space blocks, and a limited number of traditional attention layers. This approach allows it to scale the total number of parameters to 120 billion while maintaining low inference costs by activating only 12 billion of them when processing each token.
A key architectural innovation is the Latent MoE design. The model consists of 88 layers organized in a periodic, alternating structure: the majority are Mamba-2 blocks, whose complexity is linear in sequence length. Strategically placed global attention layers (Grouped-Query Attention, 32 query / 2 KV heads) act as "anchors," preserving important dependencies and exchanging information between distant parts of the context. A distinctive feature is the MoE layers, which carry no attention mechanism: all resource-intensive operations (routing, expert computation, and all-to-all communication) are performed in a compressed latent space, projected from 4096 down to 1024 dimensions, so that 22 of the 512 experts can be activated per token with minimal overhead, reducing memory consumption and accelerating inference without sacrificing quality. This hybrid design balances speed, memory, and accuracy, optimizing long-context performance and making the model well suited to Retrieval-Augmented Generation (RAG) and large-scale document analysis.
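To make the latent-space routing concrete, here is a minimal numpy sketch of a latent MoE layer: tokens are first projected down, routing and expert computation happen entirely in the compressed space, and only the final result is projected back to model width. The dimensions are toy values (the real model projects 4096 down to 1024 and activates 22 of 512 experts), and all weight names and the tanh expert non-linearity are illustrative assumptions, not the model's actual parameterization.

```python
import numpy as np

# Toy dimensions; the real model uses D_MODEL=4096, D_LATENT=1024,
# N_EXPERTS=512, TOP_K=22.
D_MODEL, D_LATENT, N_EXPERTS, TOP_K = 64, 16, 8, 2

rng = np.random.default_rng(0)
W_down = rng.standard_normal((D_MODEL, D_LATENT)) * 0.1    # compress to latent space
W_up = rng.standard_normal((D_LATENT, D_MODEL)) * 0.1      # project back to model width
W_router = rng.standard_normal((D_LATENT, N_EXPERTS)) * 0.1
experts = rng.standard_normal((N_EXPERTS, D_LATENT, D_LATENT)) * 0.1

def latent_moe(x):
    """x: (tokens, D_MODEL) -> (tokens, D_MODEL); routing done in latent space."""
    z = x @ W_down                                   # all heavy work happens at D_LATENT
    logits = z @ W_router
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]    # top-k expert indices per token
    gates = np.take_along_axis(logits, top, axis=-1)
    gates = np.exp(gates) / np.exp(gates).sum(-1, keepdims=True)  # softmax over selected
    out = np.zeros_like(z)
    for t in range(z.shape[0]):
        for g, e in zip(gates[t], top[t]):
            out[t] += g * np.tanh(z[t] @ experts[e]) # expert MLP in latent space
    return out @ W_up                                # decompress once, at the end

y = latent_moe(rng.standard_normal((4, D_MODEL)))
```

The point of the design is visible in the shapes: the router and every expert operate on `D_LATENT`-sized vectors, so routing cost and all-to-all traffic shrink by the compression ratio, while only the single down/up projections touch the full model width.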
The uniqueness of Nemotron 3 Super is underscored by techniques rarely seen in open models. First, it is among the first models to undergo a full pre-training cycle on 25 trillion tokens, with a large portion of the data represented in the 4-bit floating-point format NVFP4. Second, it uses Multi-Token Prediction (MTP): the model is trained to predict several future tokens simultaneously, which not only improves training quality but also serves as a built-in speculative decoding mechanism that accelerates response generation. Third, the post-training process (RLHF) used a specially trained generative reward model, Qwen3-Nemotron-235B-A22B-GenRM-2603, built upon Qwen3-235B-A22B-Thinking-2507 and trained specifically to evaluate response quality. This enabled fine-tuning of the model's behavior, enhancing its helpfulness and instruction-following capabilities.
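The draft-and-verify loop behind MTP-based speculative decoding can be sketched as follows. Here `draft_fn` (the cheap MTP heads) and `verify_fn` (the main model's next-token choice) are hypothetical stand-ins, and acceptance is checked greedily token by token; in a real implementation the verification of all drafted tokens happens in a single batched forward pass, which is where the speedup comes from.

```python
def speculative_step(prefix, draft_fn, verify_fn, k=4):
    """Draft k tokens with the cheap MTP heads, then keep the longest
    prefix the full model agrees with, plus one corrected token."""
    draft = draft_fn(prefix, k)        # k candidate future tokens from MTP heads
    accepted = []
    ctx = list(prefix)
    for t in draft:
        expected = verify_fn(ctx)      # main model's next-token choice
        if expected != t:
            accepted.append(expected)  # reject the draft, take the model's token
            return accepted
        accepted.append(t)             # draft token confirmed, keep going
        ctx.append(t)
    return accepted

# Toy usage: the "model" simply counts upward, so a correct draft is fully
# accepted, while a wrong draft is cut off at the first mismatch.
count_up = lambda ctx: ctx[-1] + 1
full = speculative_step([0], lambda p, k: [1, 2, 3, 4], count_up)   # all accepted
partial = speculative_step([0], lambda p, k: [1, 2, 99, 4], count_up)  # stops at 99
```

Because every accepted draft token replaces one full forward pass of the 120B model, the generation speed scales with how often the MTP heads agree with the main model.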
Benchmark results show outstanding performance. Nemotron 3 Super achieves top positions in mathematical reasoning benchmarks (AIME25, HMMT), where it outperforms much larger models. In programming (LiveCodeBench) and agentic tasks (SWE-Bench), it significantly surpasses counterparts such as GPT-OSS-120B, confirming its strength on practical workloads. A key advantage is inference speed (up to 2.2x faster than comparable models) at competitive quality. Its long-context performance is particularly noteworthy: on the RULER test at 1 million tokens it scores 91.75%, substantially ahead of competitors.
Thanks to its efficient architecture and support for a 1-million-token context, the model becomes an ideal choice for developing autonomous AI agents, automating IT ticketing, code writing and review, and building complex RAG systems that operate on massive amounts of unstructured information.
| Model Name | Context | Type | GPU | Status | Link |
|---|---|---|---|---|---|

There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following configurations:
| Name | Context | Parallelism | GPU | Price | TPS | Max Concurrency | |
|---|---|---|---|---|---|---|---|
| | 262,144.0 | pipeline | 6 | $1.65 | | 0.996 | Launch |
| | 262,144.0 | tensor | 4 | $1.75 | | 3.312 | Launch |
| | 262,144.0 | tensor | 4 | $2.34 | | 3.312 | Launch |
| | 262,144.0 | tensor | 4 | $2.97 | | 3.312 | Launch |
| | 262,144.0 | tensor | 4 | $3.68 | | 3.312 | Launch |
| | 262,144.0 | pipeline | 3 | $3.89 | | 4.470 | Launch |
| | 262,144.0 | | 1 | $4.11 | | 5.952 | Launch |
| | 262,144.0 | pipeline | 3 | $4.34 | | 4.470 | Launch |
| | 262,144.0 | tensor | 4 | $4.35 | | 16.653 | Launch |
| | 262,144.0 | tensor | 2 | $4.61 | | 32.311 | Launch |
| | 262,144.0 | | 1 | $4.74 | | 25.548 | Launch |
| | 262,144.0 | tensor | 4 | $5.74 | | 16.653 | Launch |
| | 262,144.0 | tensor | 2 | $7.84 | | 32.311 | Launch |
| Name | Context | Parallelism | GPU | Price | TPS | Max Concurrency | |
|---|---|---|---|---|---|---|---|
| | 262,144.0 | | 1 | $4.74 | | 2.254 | Launch |
| | 262,144.0 | tensor | 2 | $4.93 | | 9.017 | Launch |
| | 262,144.0 | tensor | 8 | $7.52 | | 15.410 | Launch |
| | 262,144.0 | tensor | 2 | $7.85 | | 9.017 | Launch |
| | 262,144.0 | tensor | 2 | $8.17 | | 20.691 | Launch |
| | 262,144.0 | pipeline | 6 | $8.86 | | 17.726 | Launch |
| | 262,144.0 | tensor | 8 | $11.55 | | 42.093 | Launch |
| Name | Context | Parallelism | GPU | Price | TPS | Max Concurrency | |
|---|---|---|---|---|---|---|---|
| | 262,144.0 | tensor | 4 | $9.17 | | 22.120 | Launch |
| | 262,144.0 | tensor | 2 | $9.42 | | 8.594 | Launch |
| | 262,144.0 | pipeline | 3 | $12.38 | | 7.436 | Launch |
| | 262,144.0 | tensor | 4 | $14.99 | | 22.120 | Launch |
| | 262,144.0 | tensor | 4 | $16.23 | | 45.468 | Launch |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.