Granite-4.0-H-Micro is the most compact model in the lineup: a dense (non-MoE) architecture with 3 billion parameters. It retains the benefits of the hybrid Mamba-2/Transformer approach but uses traditional dense feed-forward layers instead of MoE blocks, which simplifies deployment and reduces inference complexity. As in the other hybrid models of the series, Mamba-2 and Transformer blocks are interleaved at a 9:1 ratio: the Mamba-2 blocks process long sequences efficiently, while the periodic attention blocks preserve the precise context understanding that attention provides. Because the model uses no positional encoding, it can in principle handle sequences of unlimited length, which is particularly valuable for applications involving long documents or extended dialogues. The dense architecture also makes resource usage more predictable and simplifies optimization for specific hardware platforms.
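The 9:1 interleaving principle can be sketched in a few lines. The layout function below is a hypothetical illustration of the ratio, not the model's actual layer ordering:

```python
# Sketch: distributing Mamba-2 and attention blocks at a 9:1 ratio.
# The interleaving pattern here is an assumption for illustration;
# the real Granite-4.0-H-Micro layer order may differ.

def hybrid_layout(total_blocks: int, mamba_per_attention: int = 9) -> list[str]:
    """Return a block list with one attention block per
    `mamba_per_attention` Mamba-2 blocks."""
    layout = []
    for i in range(total_blocks):
        if (i + 1) % (mamba_per_attention + 1) == 0:
            layout.append("attention")
        else:
            layout.append("mamba2")
    return layout

blocks = hybrid_layout(40)
print(blocks.count("mamba2"), blocks.count("attention"))  # 36 4
```

With 40 blocks, every tenth block is attention, giving 36 Mamba-2 blocks to 4 attention blocks, i.e. 9:1.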
Despite its compact size, H-Micro demonstrates excellent performance. On the MMLU benchmark, the model achieves 67.43%, while on IFEval, its average score is 84.32%, which is an outstanding result for a 3-billion-parameter model. In RAG (Retrieval-Augmented Generation) tasks, Granite-4.0-H-Micro scores 72 points, significantly outperforming much larger models such as Qwen3-8B (55 points) and Llama-3.3-70B (61 points).
H-Micro is ideally suited for resource-constrained scenarios: deployment on edge devices, embedded systems, and latency-critical applications. According to the release documentation, H-Micro requires only 4 GB of memory in 8-bit mode, so it can run even on devices with limited resources, including a Raspberry Pi with 8 GB of RAM. The model is also optimized for various hardware accelerators, including Qualcomm's NPU. In corporate applications, H-Micro is recommended for local processing of sensitive data where privacy requirements prevent sending data to external servers. The model handles document analysis, information extraction, basic classification, and short text generation effectively while keeping all data on the local device.
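The quoted 4 GB figure is consistent with a simple back-of-the-envelope estimate. The sketch below counts weight storage only, ignoring activations, KV/state caches, and runtime overhead, which is why the published 8-bit figure is somewhat higher than the raw weights:

```python
# Rough weight-memory estimate for a 3B-parameter dense model.
# Counts the weights alone; activations, caches, and runtime overhead
# explain why the quoted 8-bit requirement (~4 GB) exceeds this number.

PARAMS = 3_000_000_000  # 3 billion parameters

def weight_memory_gb(params: int, bits_per_weight: int) -> float:
    """GiB needed to store the weights at a given precision."""
    return params * bits_per_weight / 8 / 1024**3

for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{weight_memory_gb(PARAMS, bits):.1f} GB")
# 16-bit: ~5.6 GB, 8-bit: ~2.8 GB, 4-bit: ~1.4 GB
```

At 8 bits per weight the parameters alone take roughly 2.8 GiB, leaving about 1 GB of the stated 4 GB budget for cache and runtime overhead.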
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend a private instance for workloads that need guaranteed dedicated capacity or strict data privacy. The available configurations and hourly rates are listed below:
Context, tokens | vCPU | RAM, MB | Disk, GB | GPU | Price, $/hr
---|---|---|---|---|---
131,072 | 16 | 16384 | 160 | 1 | $0.33
131,072 | 10 | 16384 | 500 | 1 | $0.38
131,072 | 16 | 32768 | 160 | 1 | $0.38
131,072 | 16 | 32768 | 160 | 1 | $0.53
131,072 | 16 | 32768 | 160 | 1 | $0.57
131,072 | 16 | 24576 | 160 | 1 | $0.88
131,072 | 16 | 32768 | 160 | 1 | $1.15
131,072 | 12 | 65536 | 160 | 1 | $1.20
131,072 | 16 | 65536 | 160 | 2 | $1.23
131,072 | 16 | 65536 | 160 | 1 | $1.59
131,072 | 16 | 65536 | 160 | 1 | $2.58
131,072 | 16 | 65536 | 160 | 1 | $5.11
131,072 | 16 | 131072 | 160 | 1 | $6.98
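Since instances are billed hourly, a rough monthly figure can be derived from the rates above. The 730 hours/month constant is an assumption, and long-term monthly billing may have its own (often discounted) pricing:

```python
# Rough monthly cost from an hourly rate, assuming ~730 billable
# hours per month (24 * 365 / 12). Treat as an upper-bound estimate;
# long-term monthly billing may be discounted.

HOURS_PER_MONTH = 730  # an assumption, not a published billing constant

def monthly_cost(hourly_usd: float) -> float:
    return round(hourly_usd * HOURS_PER_MONTH, 2)

for rate in (0.33, 1.23, 6.98):
    print(f"${rate}/hr -> ~${monthly_cost(rate)}/mo")
# $0.33/hr -> ~$240.9/mo, $1.23/hr -> ~$897.9/mo, $6.98/hr -> ~$5095.4/mo
```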
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.