granite-4.0-h-micro

Granite-4.0-H-Micro is the most compact model in the lineup, featuring a dense (non-MoE) architecture with 3 billion parameters. It retains the benefits of the hybrid Mamba-2/Transformer approach but uses traditional dense feed-forward layers instead of MoE blocks, simplifying deployment and reducing inference complexity. The ratio of Mamba-2 to Transformer blocks in H-Micro follows the same 9:1 principle as the other hybrid models in the series, ensuring efficient processing of long sequences while preserving the strong context understanding characteristic of full attention. The absence of positional encoding theoretically allows the model to handle sequences of unlimited length, which is particularly valuable for applications involving long documents or extended dialogues. The dense architecture also makes resource usage more predictable and simplifies optimization for specific hardware platforms.
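
To make the 9:1 interleaving concrete, the minimal sketch below builds a layer schedule for a 40-layer stack with 4 full-attention blocks, matching the figures in the specifications further down this page; where exactly the attention blocks sit inside the real model is an assumption made purely for illustration.

```python
# Illustrative sketch of a 9:1 Mamba-2 / full-attention layer schedule.
# The 40-layer / 4-attention-block split matches the specifications below;
# the exact placement of the attention blocks is a hypothetical choice,
# not IBM's documented layout.

NUM_LAYERS = 40
ATTENTION_EVERY = 10  # one full-attention block per ten layers -> 9:1 ratio

def build_layer_schedule(num_layers: int = NUM_LAYERS) -> list[str]:
    schedule = []
    for i in range(num_layers):
        if (i + 1) % ATTENTION_EVERY == 0:
            schedule.append("full_attention")  # hypothetical placement
        else:
            schedule.append("mamba2")
    return schedule

schedule = build_layer_schedule()
print(schedule.count("mamba2"), "Mamba-2 blocks,",
      schedule.count("full_attention"), "full-attention blocks")
# -> 36 Mamba-2 blocks, 4 full-attention blocks
```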

Despite its compact size, H-Micro demonstrates excellent performance. On the MMLU benchmark, the model achieves 67.43%, while on IFEval, its average score is 84.32%, which is an outstanding result for a 3-billion-parameter model. In RAG (Retrieval-Augmented Generation) tasks, Granite-4.0-H-Micro scores 72 points, significantly outperforming much larger models such as Qwen3-8B (55 points) and Llama-3.3-70B (61 points).

H-Micro is ideally suited for resource-constrained scenarios, including deployment on edge devices, embedded systems, and applications with strict latency requirements. According to the release documentation, H-Micro requires only about 4 GB of memory in 8-bit mode, enabling it to run even on devices with limited resources, including a Raspberry Pi with 8 GB of RAM. The model is also optimized for various hardware accelerators, including Qualcomm NPUs. In corporate applications, H-Micro is recommended for local processing of sensitive data where privacy requirements prevent data from being sent to external servers. The model handles tasks such as document analysis, information extraction, basic classification, and short text generation effectively, keeping all data on the local device.
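
As a minimal sketch of such a local deployment, the following loads the model with 8-bit weights via transformers and bitsandbytes; the Hugging Face model ID is an assumption here, and the comment shows the rough arithmetic behind the ~4 GB figure.

```python
# Minimal sketch: loading granite-4.0-h-micro with 8-bit weights for a
# memory-constrained device. Assumes the Hugging Face ID
# "ibm-granite/granite-4.0-h-micro" and requires transformers >= 4.56.0
# plus bitsandbytes on a CUDA-capable machine.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "ibm-granite/granite-4.0-h-micro"  # assumed model ID

# 8-bit weights: ~3.19B params x 1 byte ≈ 3.2 GB, plus activation and
# cache overhead -- consistent with the ~4 GB figure cited above.
quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers automatically on the available hardware
)
```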


Announce Date: October 2, 2025
Parameters: 3.19B
Context: 131K
Layers: 40 (4 with full attention)
Attention Type: Hybrid Mamba-2 / Transformer
VRAM requirements: 7.2 GB with 4-bit quantization
Developer: IBM
Transformers Version: 4.56.0
License: Apache 2.0
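
Given the Transformers version listed above, a basic generation call might look like the sketch below; the Hugging Face model ID is an assumption, and the prompt is just an example.

```python
# Basic chat-style generation sketch (Hugging Face model ID assumed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-4.0-h-micro"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Extract the key dates from this document: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))
```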

Public endpoint

Use our pre-built public endpoints for free to test inference and explore granite-4.0-h-micro capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
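
Once an endpoint is listed here, a request would typically look like the sketch below, assuming an OpenAI-compatible API; the base URL is a placeholder, not a real address.

```python
# Hypothetical request to a public endpoint, assuming an OpenAI-compatible
# API. The base URL is a placeholder; the token comes from the token
# management page mentioned above.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-endpoint.example/v1",  # placeholder URL
    api_key=os.environ["API_TOKEN"],
)

response = client.chat.completions.create(
    model="granite-4.0-h-micro",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```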

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the sketch after this list).
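
For the custom-weights scenario, attaching a LoRA adapter on a private instance could look like the sketch below; the adapter path is a placeholder and peft is assumed to be installed alongside transformers.

```python
# Sketch: serving custom weights by attaching a LoRA adapter with peft.
# The adapter path is a placeholder; the base model ID is assumed.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "ibm-granite/granite-4.0-h-micro"  # assumed base model ID
adapter_path = "/path/to/your/lora-adapter"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_path)
model = model.merge_and_unload()  # optionally fold the adapter into the base weights
```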

Recommended configurations for hosting granite-4.0-h-micro

All configurations below support the model's full 131,072-token context. Configuration names encode GPU count, vCPU count, RAM (GB), and disk (GB): for example, teslat4-1.16.16.160 is 1× Tesla T4 with 16 vCPU, 16 GB of RAM, and a 160 GB disk.

Prices:
Name                         vCPU  RAM, MB  Disk, GB  GPU  Price, hour
teslat4-1.16.16.160            16    16384       160    1        $0.33
rtx2080ti-1.10.16.500          10    16384       500    1        $0.38
teslaa2-1.16.32.160            16    32768       160    1        $0.38
teslaa10-1.16.32.160           16    32768       160    1        $0.53
rtx3080-1.16.32.160            16    32768       160    1        $0.57
rtx3090-1.16.24.160            16    24576       160    1        $0.88
rtx4090-1.16.32.160            16    32768       160    1        $1.15
teslav100-1.12.64.160          12    65536       160    1        $1.20
rtxa5000-2.16.64.160.nvlink    16    65536       160    2        $1.23
rtx5090-1.16.64.160            16    65536       160    1        $1.59
teslaa100-1.16.64.160          16    65536       160    1        $2.58
teslah100-1.16.64.160          16    65536       160    1        $5.11
h200-1.16.128.160              16   131072       160    1        $6.98


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.