Nemotron-3 Nano-30B is a new-generation LLM from NVIDIA. Its key feature is a hybrid architecture that combines Mamba2 layers, Transformer layers, and Mixture-of-Experts (MoE) routing in a single network. This design lets the model process massive inputs efficiently while maintaining logical coherence and high throughput. The model has 32 billion parameters in total, but thanks to MoE routing only an active subset of roughly 3.5 billion parameters is engaged when generating each token. The result is a unique balance: the model has the knowledge and capacity of a 30B-scale network while consuming compute on par with compact models optimized for fast inference. It was trained on about 25 trillion tokens covering 43 programming languages and more than 19 natural languages.
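The active-parameter economics of MoE can be illustrated with a toy router (a minimal NumPy sketch, not NVIDIA's implementation; the expert count, dimensions, and top-k value here are arbitrary, not the model's actual configuration):

```python
import numpy as np

def moe_forward(x, experts_w, gate_w, k=2):
    """Toy Mixture-of-Experts layer: route a token to its top-k experts.

    Only k of the n_experts weight matrices are multiplied per token,
    which is why active parameters are a small fraction of the total.
    """
    logits = x @ gate_w                       # router score per expert
    top = np.argsort(logits)[-k:]             # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over the chosen experts
    return sum(w * (x @ experts_w[i]) for i, w in zip(top, weights))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
gate = rng.standard_normal((d, n_experts))
y = moe_forward(rng.standard_normal(d), experts, gate, k=2)
# With 16 experts and k=2, only 1/8 of the expert weights touch this token.
```

The same principle, scaled up, is how a 32B-parameter network can run with the per-token compute of a ~3.5B one.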
Compared to Nemotron v2, the new version replaces the dense architecture with an MoE one, delivering 4 times greater throughput. Another key capability of Nemotron-3 Nano is support for a context window of up to 1 million tokens. This expansion plays to the strengths of Mamba2 layers, which process long sequences with minimal memory overhead. A crucial stage in the model's creation was multi-environment reinforcement learning using the NeMo Gym library. The model was trained not just to answer questions but to perform action sequences: calling tools, writing functional code, and constructing multi-step plans. This makes its behavior more predictable and reliable in complex scenarios where step-by-step result verification is required.
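A rough back-of-the-envelope calculation shows why long contexts favor this design: a pure Transformer's KV cache grows linearly with context length, while a Mamba2 layer keeps a fixed-size recurrent state regardless of length. The layer count and head sizes below are hypothetical, chosen only to illustrate the order of magnitude:

```python
def kv_cache_bytes(context_len, n_layers, n_kv_heads, head_dim, bytes_per_val=2):
    """Approximate KV-cache size: K and V tensors per layer, fp16/bf16 values."""
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_val

# Hypothetical attention-only model: 48 layers, 8 KV heads of dimension 128.
full_attn = kv_cache_bytes(1_000_000, 48, 8, 128)
print(f"KV cache at 1M tokens: {full_attn / 2**30:.1f} GiB")  # ~183 GiB
```

A cache of that size would not fit on a single accelerator, which is why replacing most attention layers with constant-memory Mamba2 layers matters at the 1M-token scale.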
On the AIME25 benchmark (American Invitational Mathematics Examination), which tests mathematical and quantitative reasoning, Nemotron-3 Nano achieves 99.2% accuracy with tool use, surpassing GPT-OSS-20B at 98.7%. On LiveCodeBench (v6, 2025-08–2025-05), the model scores 68.2%, outperforming Qwen3-30B (66.0%) and GPT-OSS-20B (61.0%). On other benchmarks the model either leads or is on par with its counterparts.
Given its architectural advantages and NVIDIA's recommendations, the model is ideally suited for the following tasks: Agentic Systems and Orchestration, Long-Context RAG, Local/On-Prem and Edge Computing, Code Generation, and Data Structuring.
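For agentic and code-generation workloads, a deployed instance is typically queried through an OpenAI-compatible API (for example, when served with vLLM). A minimal sketch; the endpoint URL and model identifier below are placeholders, not actual values for your deployment:

```python
import json
import urllib.request

API_URL = "http://your-instance:8000/v1/chat/completions"  # placeholder URL

def build_request(prompt, model="nvidia/Nemotron-3-Nano-30B", max_tokens=512):
    """Build an OpenAI-compatible chat-completion request (model id is a placeholder)."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_request("Write a Python function that flattens a nested list.")
# Send with: urllib.request.urlopen(req), then read
# json.load(resp)["choices"][0]["message"]["content"] from the response.
```

The same request shape works for tool-calling and RAG pipelines, since those build on the standard chat-completions schema.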
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend the following configurations for deploying a private instance:
| Context | Parallelism | GPUs | Price, $/hour | TPS | Max Concurrency |
|---|---|---|---|---|---|
| 262,144 | tensor | 2 | $0.57 | | 4.648 |
| 262,144 | pipeline | 3 | $0.84 | | 3.612 |
| 262,144 | pipeline | 3 | $0.88 | | 12.347 |
| 262,144 | tensor | 4 | $1.12 | | 8.400 |
| 262,144 | | 1 | $1.18 | | 1.607 |
| 262,144 | tensor | 2 | $1.23 | | 13.964 |
| 262,144 | tensor | 4 | $1.43 | | 20.046 |
| 262,144 | pipeline | 3 | $1.49 | | 1.865 |
| 262,144 | | 1 | $1.69 | | 6.265 |
| 262,144 | tensor | 4 | $1.75 | | 38.679 |
| 262,144 | tensor | 4 | $1.88 | | 6.071 |
| 262,144 | | 1 | $2.37 | | 34.215 |
| 262,144 | tensor | 4 | $3.01 | | 38.679 |
| 262,144 | | 1 | $3.83 | | 34.215 |
| 262,144 | | 1 | $4.11 | | 42.367 |
| 262,144 | tensor | 2 | $4.93 | | 79.181 |
| 262,144 | tensor | 2 | $9.40 | | 150.220 |
| 262,144 | tensor | 4 | $19.23 | | 311.190 |
| Context | Parallelism | GPUs | Price, $/hour | TPS | Max Concurrency |
|---|---|---|---|---|---|
| 262,144 | pipeline | 3 | $0.88 | | 2.394 |
| 262,144 | tensor | 2 | $1.23 | | 4.011 |
| 262,144 | tensor | 4 | $1.29 | | 10.093 |
| 262,144 | pipeline | 3 | $1.31 | | 2.394 |
| 262,144 | tensor | 4 | $1.43 | | 10.093 |
| 262,144 | tensor | 4 | $1.75 | | 28.726 |
| 262,144 | tensor | 2 | $1.92 | | 4.011 |
| 262,144 | | 1 | $2.37 | | 24.262 |
| 262,144 | tensor | 2 | $2.93 | | 13.328 |
| 262,144 | tensor | 4 | $3.01 | | 28.726 |
| 262,144 | | 1 | $3.83 | 134.650 | 24.262 |
| 262,144 | | 1 | $4.11 | | 32.414 |
| 262,144 | tensor | 2 | $4.93 | | 69.228 |
| 262,144 | tensor | 2 | $9.40 | | 140.267 |
| 262,144 | tensor | 4 | $19.23 | | 301.238 |
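Hourly prices can be converted into an approximate cost per generated token when sustained throughput is known. As an illustration, taking the configuration above priced at $3.83/hour with a listed throughput of about 134.65 tokens per second (assuming that rate is sustained):

```python
# Illustrative cost-per-token arithmetic for one configuration above.
price_per_hour = 3.83          # USD per hour
tps = 134.650                  # tokens per second (from the table)
tokens_per_hour = tps * 3600
cost_per_million = price_per_hour / tokens_per_hour * 1_000_000
print(f"${cost_per_million:.2f} per 1M tokens")  # ~ $7.90
```

Real per-token cost depends on utilization: a dedicated instance bills by the hour whether or not it is saturated with requests.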
| Context | Parallelism | GPUs | Price, $/hour | TPS | Max Concurrency |
|---|---|---|---|---|---|
| 262,144 | pipeline | 6 | $1.69 | | 5.331 |
| 262,144 | tensor | 4 | $1.76 | | 8.566 |
| 262,144 | | 1 | $2.51 | | 4.102 |
| 262,144 | tensor | 4 | $2.97 | | 8.566 |
| 262,144 | tensor | 4 | $3.68 | | 8.566 |
| 262,144 | | 1 | $3.96 | | 4.102 |
| 262,144 | | 1 | $4.12 | | 12.254 |
| 262,144 | pipeline | 3 | $4.35 | | 10.184 |
| 262,144 | tensor | 2 | $4.94 | | 49.068 |
| 262,144 | tensor | 4 | $5.76 | | 27.199 |
| 262,144 | tensor | 2 | $9.41 | | 120.107 |
| 262,144 | tensor | 4 | $19.23 | | 281.077 |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.