granite-4.0-h-micro

Granite-4.0-H-Micro is the most compact model in the lineup, featuring a dense (non-MoE) architecture with 3 billion parameters. It retains the benefits of the hybrid Mamba-2/Transformer approach but uses traditional dense feed-forward layers instead of MoE blocks, simplifying deployment and reducing inference complexity. The ratio of Mamba-2 to Transformer blocks in H-Micro follows the same 9:1 split as the other hybrid models in the series: the Mamba-2 blocks process long sequences efficiently, while the periodic full-attention blocks preserve the precise contextual understanding characteristic of attention. The absence of positional encoding means the model is not architecturally bound to a fixed sequence length, which is particularly valuable for applications involving long documents or extended dialogues. The dense architecture also makes resource usage more predictable and simplifies optimization for specific hardware platforms.
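The 9:1 layer split described above can be sanity-checked with a small sketch. The 40-layer total and 4 full-attention layers come from the spec block below; the rest is simple arithmetic.

```python
# Sketch of the hybrid layer mix: 40 layers total, of which 4 use full
# attention (per the model card) and the remainder are Mamba-2 blocks.
TOTAL_LAYERS = 40
ATTENTION_LAYERS = 4
MAMBA_LAYERS = TOTAL_LAYERS - ATTENTION_LAYERS  # 36 Mamba-2 blocks

# Ratio of Mamba-2 blocks to full-attention blocks
ratio = MAMBA_LAYERS // ATTENTION_LAYERS
print(f"{MAMBA_LAYERS} Mamba-2 : {ATTENTION_LAYERS} attention -> {ratio}:1")
```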

Despite its compact size, H-Micro demonstrates excellent performance. On the MMLU benchmark, the model achieves 67.43%, while on IFEval, its average score is 84.32%, which is an outstanding result for a 3-billion-parameter model. In RAG (Retrieval-Augmented Generation) tasks, Granite-4.0-H-Micro scores 72 points, significantly outperforming much larger models such as Qwen3-8B (55 points) and Llama-3.3-70B (61 points).

H-Micro is well suited to resource-constrained scenarios, including deployment on edge devices, embedded systems, and latency-critical applications. According to the release documentation, H-Micro requires only about 4 GB of memory in 8-bit mode, enabling it to run even on devices with limited resources, including a Raspberry Pi with 8 GB of RAM. The model is also optimized for various hardware accelerators, including Qualcomm NPUs. In corporate applications, H-Micro is recommended for local processing of sensitive data where privacy requirements prevent data from being sent to external servers. The model handles tasks such as document analysis, information extraction, basic classification, and short text generation while keeping all data on the local device.
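A back-of-the-envelope estimate shows where the 4 GB figure comes from: at 8-bit quantization, the weights alone take roughly 1 byte per parameter, plus runtime overhead. The overhead value below is an illustrative assumption, not a documented number; real usage depends on the runtime, context length, and Mamba-state/KV-cache sizes.

```python
# Rough memory estimate for 8-bit inference of a 3B-parameter model.
params = 3e9            # ~3 billion parameters
bytes_per_param = 1     # int8 quantization: 1 byte per weight

weights_gb = params * bytes_per_param / 1024**3  # ~2.8 GB of weights
overhead_gb = 1.0       # assumed runtime + activation overhead (illustrative)

total_gb = weights_gb + overhead_gb
print(f"weights ≈ {weights_gb:.2f} GB, total ≈ {total_gb:.2f} GB")
```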


Announce Date: October 2, 2025
Parameters: 3B
Context: 128K (131,072 tokens)
Layers: 40 (4 with full attention)
Attention Type: Hybrid (Mamba-2 + full attention)
Developer: IBM
Transformers Version: 4.56.0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore granite-4.0-h-micro capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name | Context | Type | GPU | Status | Link
There are no public endpoints for this model yet.
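When a public endpoint becomes available, it will presumably expose an OpenAI-compatible chat API, as is common for hosted inference. The sketch below only builds the request payload; the URL and token are hypothetical placeholders, not documented values, so substitute the real endpoint and your token from the token management page.

```python
import json

# Hypothetical endpoint and token -- placeholders, not real values.
API_URL = "https://example.invalid/v1/chat/completions"  # placeholder URL
API_TOKEN = "YOUR_API_TOKEN"                             # from the token page

payload = {
    "model": "granite-4.0-h-micro",
    "messages": [
        {"role": "user", "content": "Summarize this contract clause: ..."}
    ],
    "max_tokens": 256,
}
headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/json",
}

body = json.dumps(payload)
print(body)
# To actually send the request:
#   requests.post(API_URL, headers=headers, data=body)
```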

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting granite-4.0-h-micro

Prices:
Name | Context | Type | GPUs | Price, hour | TPS | Max Concurrency
teslat4-1.16.16.160 | 131,072 | — | 1 | $0.33 | — | 7.981
rtx2080ti-1.10.16.500 | 131,072 | — | 1 | $0.38 | — | 3.638
teslaa2-1.16.32.160 | 131,072 | — | 1 | $0.38 | — | 7.981
teslaa10-1.16.32.160 | 131,072 | — | 1 | $0.53 | — | 14.929
rtx3080-1.16.32.160 | 131,072 | — | 1 | $0.57 | — | 2.770
rtx3090-1.16.24.160 | 131,072 | — | 1 | $0.83 | — | 14.929
rtx4090-1.16.32.160 | 131,072 | — | 1 | $1.02 | — | 14.929
teslav100-1.12.64.160 | 131,072 | — | 1 | $1.20 | — | 21.877
rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | — | 33.361
rtx5090-1.16.64.160 | 131,072 | — | 1 | $1.59 | — | 21.877
teslaa100-1.16.64.160 | 131,072 | — | 1 | $2.37 | — | 63.565
h100-1.16.64.160 | 131,072 | — | 1 | $3.83 | — | 63.565
h100nvl-1.16.96.160 | 131,072 | — | 1 | $4.11 | — | 75.725
h200-1.16.128.160 | 131,072 | — | 1 | $4.74 | — | 116.545
Prices:
Name | Context | Type | GPUs | Price, hour | TPS | Max Concurrency
teslat4-1.16.16.160 | 131,072 | — | 1 | $0.33 | — | 7.624
rtx2080ti-1.10.16.500 | 131,072 | — | 1 | $0.38 | — | 3.281
teslaa2-1.16.32.160 | 131,072 | — | 1 | $0.38 | 32.700 | 7.624
teslaa10-1.16.32.160 | 131,072 | — | 1 | $0.53 | 74.940 | 14.572
rtx3080-1.16.32.160 | 131,072 | — | 1 | $0.57 | — | 2.413
rtx3090-1.16.24.160 | 131,072 | — | 1 | $0.83 | 109.070 | 14.572
rtx4090-1.16.32.160 | 131,072 | — | 1 | $1.02 | 122.000 | 14.572
teslav100-1.12.64.160 | 131,072 | — | 1 | $1.20 | — | 21.520
rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | — | 33.003
rtx5090-1.16.64.160 | 131,072 | — | 1 | $1.59 | 177.930 | 21.520
teslaa100-1.16.64.160 | 131,072 | — | 1 | $2.37 | 118.560 | 63.208
h100-1.16.64.160 | 131,072 | — | 1 | $3.83 | 118.320 | 63.208
h100nvl-1.16.96.160 | 131,072 | — | 1 | $4.11 | 164.880 | 75.368
h200-1.16.128.160 | 131,072 | — | 1 | $4.74 | — | 116.188
Prices:
Name | Context | Type | GPUs | Price, hour | TPS | Max Concurrency
teslat4-1.16.16.160 | 131,072 | — | 1 | $0.33 | — | 5.327
rtx2080ti-1.10.16.500 | 131,072 | — | 1 | $0.38 | — | 0.984
teslaa2-1.16.32.160 | 131,072 | — | 1 | $0.38 | 21.040 | 5.327
teslaa10-1.16.32.160 | 131,072 | — | 1 | $0.53 | 48.880 | 12.275
rtx3080-1.16.32.160 | 131,072 | — | 1 | $0.57 | — | 0.116
rtx3090-1.16.24.160 | 131,072 | — | 1 | $0.83 | 70.050 | 12.275
rtx4090-1.16.32.160 | 131,072 | — | 1 | $1.02 | 82.000 | 12.275
teslav100-1.12.64.160 | 131,072 | — | 1 | $1.20 | — | 19.223
rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | — | 30.707
rtx5090-1.16.64.160 | 131,072 | — | 1 | $1.59 | 131.620 | 19.223
teslaa100-1.16.64.160 | 131,072 | — | 1 | $2.37 | 95.560 | 60.912
h100-1.16.64.160 | 131,072 | — | 1 | $3.83 | 107.180 | 60.912
h100nvl-1.16.96.160 | 131,072 | — | 1 | $4.11 | 158.220 | 73.071
h200-1.16.128.160 | 131,072 | — | 1 | $4.74 | — | 113.891

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.