Nemotron-3 Nano-30B is a new-generation LLM from NVIDIA. Its key feature is a hybrid architecture that combines Mamba2 layers, Transformer layers, and Mixture-of-Experts (MoE) routing in a single network. This design lets the model process massive datasets efficiently while maintaining logical coherence and high throughput. The model has roughly 32 billion parameters in total, but thanks to MoE routing only an active subset of approximately 3.5 billion parameters is engaged to generate each token. This yields a distinctive balance: the model has the "knowledge" and capacity of a 30B-scale network while consuming computational resources on par with compact models optimized for fast inference. It was trained on a dataset of about 25 trillion tokens spanning 43 programming languages and more than 19 natural languages.
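The sparse-activation idea behind MoE can be illustrated with a minimal sketch (this is not NVIDIA's actual implementation; the shapes, router, and top-k value below are illustrative assumptions): a router scores every expert per token, and only the top-k experts actually run, so the active parameter count stays a small fraction of the total.

```python
import numpy as np

def moe_forward(x, experts, router_w, k=2):
    """Toy top-k MoE layer. x: (d,) token vector; experts: list of (d, d) matrices."""
    scores = router_w @ x                  # one routing score per expert
    top = np.argsort(scores)[-k:]          # indices of the k highest-scoring experts
    gates = np.exp(scores[top])
    gates /= gates.sum()                   # softmax over the selected experts only
    # Only k expert matrices are multiplied; the remaining experts stay idle
    # for this token, which is why active parameters << total parameters.
    return sum(g * (experts[i] @ x) for g, i in zip(gates, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
router_w = rng.normal(size=(n_experts, d))
y = moe_forward(rng.normal(size=d), experts, router_w, k=2)
```

With k=2 of 16 experts active, each token touches only 1/8 of the expert weights, which mirrors how a 32B-total model can run with ~3.5B active parameters.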
Compared to Nemotron v2, the new version replaces the dense architecture with MoE, delivering 4 times greater throughput. Another key capability of Nemotron-3 Nano is support for a context window of up to 1 million tokens. This expansion plays to the strengths of the Mamba2 layers, which process long sequences with minimal memory overhead. A crucial stage in the model's development was multi-environment reinforcement learning using the NeMo Gym library. The model was trained not just to answer questions but to perform action sequences: calling tools, writing functional code, and constructing multi-step plans. This makes its behavior more predictable and reliable in complex scenarios that require step-by-step verification of results.
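Why state-space layers scale to million-token contexts can be shown with a minimal sketch (a generic linear state-space recurrence, not Mamba2's actual selective-scan kernel; all matrices and sizes below are made up for the demo): the entire history is compressed into a fixed-size state, so per-token memory stays constant instead of growing like an attention KV cache.

```python
import numpy as np

def ssm_scan(xs, A, B, C):
    """Linear state-space recurrence. xs: (L, d_in); A: (n, n); B: (n, d_in); C: (d_out, n)."""
    n = A.shape[0]
    h = np.zeros(n)             # fixed-size recurrent state, independent of sequence length
    ys = []
    for x in xs:                # one constant-memory step per token
        h = A @ h + B @ x       # state update folds the new token into h
        ys.append(C @ h)        # readout from the compressed state
    return np.stack(ys)

rng = np.random.default_rng(1)
L, d_in, n, d_out = 1000, 4, 8, 4
A = 0.9 * np.eye(n)             # simple stable dynamics, chosen only to keep the demo bounded
B = 0.1 * rng.normal(size=(n, d_in))
C = 0.1 * rng.normal(size=(d_out, n))
ys = ssm_scan(rng.normal(size=(L, d_in)), A, B, C)
```

However long `xs` grows, the recurrence carries only the `n`-dimensional state `h` between steps, which is the property that keeps memory overhead minimal at long context lengths.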
On the AIME25 benchmark (American Invitational Mathematics Examination), which tests mathematical and quantitative reasoning, Nemotron-3 Nano achieves 99.2% accuracy with tool use, surpassing GPT-OSS-20B at 98.7%. On LiveCodeBench (v6, 2025-08–2025-05), the model scores 68.2%, outperforming Qwen3-30B (66.0%) and GPT-OSS-20B (61.0%). On other benchmarks, the model either leads or is on par with its counterparts.
Given its architectural advantages and NVIDIA's recommendations, the model is ideally suited for the following tasks: Agentic Systems and Orchestration, Long-Context RAG, Local/On-Prem and Edge Computing, Code Generation, and Data Structuring.
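For agentic use, the model is typically served behind an OpenAI-compatible chat API with function calling. The sketch below only builds such a request payload; the model id, the `search_docs` tool, and its parameters are placeholders invented for illustration, not official names.

```python
import json

# Hypothetical tool definition in the OpenAI-compatible "tools" format.
tool = {
    "type": "function",
    "function": {
        "name": "search_docs",  # placeholder tool name
        "description": "Search internal documents for a query.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

# Request body an agent framework would POST to a /chat/completions endpoint.
request = {
    "model": "nemotron-3-nano-30b",  # placeholder model id
    "messages": [{"role": "user", "content": "Summarize our Q3 deployment notes."}],
    "tools": [tool],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}
payload = json.dumps(request)
```

With `tool_choice` set to `auto`, the model returns either a normal answer or a structured tool call, which the orchestrating code executes before feeding the result back in a follow-up message.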
| Model Name | Context | Type | GPU | Status | Link |
|---|---|---|---|---|---|
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following scenarios:
| Name | GPUs | Context | Parallelism | Price/hr | TPS | Max Concurrency |
|---|---|---|---|---|---|---|
| | 1 | 262,144 | — | $0.53 | | 1.606 |
| | 2 | 262,144 | tensor | $0.54 | | 4.646 |
| | 2 | 262,144 | tensor | $0.57 | | 4.646 |
| | 1 | 262,144 | — | $0.83 | | 1.606 |
| | 3 | 262,144 | pipeline | $0.84 | | 3.612 |
| | 1 | 262,144 | — | $1.02 | | 1.606 |
| | 4 | 262,144 | tensor | $1.12 | | 8.398 |
| | 1 | 262,144 | — | $1.20 | | 6.264 |
| | 2 | 262,144 | tensor | $1.23 | | 13.961 |
| | 3 | 262,144 | pipeline | $1.43 | | 1.865 |
| | 1 | 262,144 | — | $1.59 | | 6.264 |
| | 4 | 262,144 | tensor | $1.82 | | 6.070 |
| | 1 | 262,144 | — | $2.37 | | 34.207 |
| | 1 | 262,144 | — | $3.83 | | 34.207 |
| | 1 | 262,144 | — | $4.11 | | 42.357 |
| | 1 | 262,144 | — | $4.74 | | 69.719 |
| Name | GPUs | Context | Parallelism | Price/hr | TPS | Max Concurrency |
|---|---|---|---|---|---|---|
| | 3 | 262,144 | pipeline | $0.88 | | 2.393 |
| | 2 | 262,144 | tensor | $0.93 | | 4.010 |
| | 4 | 262,144 | tensor | $0.96 | | 10.091 |
| | 3 | 262,144 | pipeline | $1.06 | | 2.393 |
| | 2 | 262,144 | tensor | $1.23 | | 4.010 |
| | 4 | 262,144 | tensor | $1.26 | | 10.091 |
| | 2 | 262,144 | tensor | $1.56 | | 4.010 |
| | 2 | 262,144 | tensor | $1.92 | 145.020 | 4.010 |
| | 2 | 262,144 | tensor | $2.22 | | 13.325 |
| | 1 | 262,144 | — | $2.37 | | 24.257 |
| | 2 | 262,144 | tensor | $2.93 | | 13.325 |
| | 1 | 262,144 | — | $3.83 | 134.650 | 24.257 |
| | 1 | 262,144 | — | $4.11 | | 32.407 |
| | 1 | 262,144 | — | $4.74 | | 59.768 |
| Name | GPUs | Context | Parallelism | Price/hr | TPS | Max Concurrency |
|---|---|---|---|---|---|---|
| | 6 | 262,144 | pipeline | $1.65 | | 5.330 |
| | 4 | 262,144 | tensor | $1.75 | | 8.564 |
| | 4 | 262,144 | tensor | $2.34 | | 8.564 |
| | 1 | 262,144 | — | $2.50 | | 4.101 |
| | 4 | 262,144 | tensor | $2.97 | | 8.564 |
| | 4 | 262,144 | tensor | $3.68 | | 8.564 |
| | 3 | 262,144 | pipeline | $3.89 | | 10.181 |
| | 1 | 262,144 | — | $3.95 | | 4.101 |
| | 1 | 262,144 | — | $4.11 | | 12.251 |
| | 3 | 262,144 | pipeline | $4.34 | | 10.181 |
| | 4 | 262,144 | tensor | $4.35 | | 27.193 |
| | 1 | 262,144 | — | $4.74 | | 39.613 |
| | 4 | 262,144 | tensor | $5.74 | | 27.193 |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.