Gemma‑4‑26B‑A4B‑it is Google’s first open model built on a Mixture‑of‑Experts (MoE) architecture. Of its 25.2 billion total parameters, only a small fraction (between 3.8 and 4 billion) is activated for each token. According to the developers, this efficiency lets the model reach approximately 97% of the quality of the dense 31B model at a significantly lower computational cost. At release, the model ranks 6th among open models on the Arena AI leaderboard, outperforming competitors up to 20 times its size.
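As a quick sanity check on the figures above, the fraction of parameters active per token can be computed directly (a minimal sketch using the 25.2B total and the ~4B upper bound on active parameters quoted in this description):

```python
# Active-parameter fraction implied by the quoted figures:
# 25.2B total parameters, at most ~4B activated per token.
total_params_b = 25.2   # total parameters, billions
active_params_b = 4.0   # upper bound on activated parameters per token, billions

fraction = active_params_b / total_params_b
print(f"~{fraction:.1%} of parameters are active per token")  # ~15.9%
```

So each token touches roughly one sixth of the weights, which is where the per-token compute saving over a comparable dense model comes from.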
The 26B A4B model is built on 30 layers and uses hybrid attention with a 1,024‑token sliding window, supporting a context window of 256K tokens. It is multimodal, handling both text and images well. Unlike dense alternatives, the MoE model is specifically optimised for efficient execution of agentic workflows and demonstrates significant progress over Gemma‑3: on the T2‑Bench agent benchmark, Gemma‑4 26B A4B scores 86.4%, whereas the previous generation achieved only 6.6%.
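The MoE mechanism behind these efficiency numbers can be illustrated with a toy example: a learned router scores all experts for each token, and only the top‑k experts actually run. This is a conceptual sketch with made-up shapes and a top‑2 router, not Gemma's actual internals:

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, top_k, d_model = 8, 2, 16
x = rng.standard_normal(d_model)                       # one token's hidden state
router_w = rng.standard_normal((n_experts, d_model))   # router projection
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

# Router scores each expert, then only the top-k are selected;
# their mixing weights are softmax-normalised over the selected scores.
scores = router_w @ x
top = np.argsort(scores)[-top_k:]
weights = np.exp(scores[top]) / np.exp(scores[top]).sum()

# Only the selected experts execute; all other expert weights stay idle,
# which is why active parameters are a small fraction of the total.
y = sum(w * (experts[i].T @ x) for w, i in zip(weights, top))
print(f"ran {top_k}/{n_experts} experts; output shape {y.shape}")
```

With 2 of 8 experts active per token, the layer stores 8 experts' worth of parameters but spends only 2 experts' worth of compute, mirroring the 25.2B-total / ~4B-active split described above.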
For developers, the key advantage of this model is its deployment efficiency. Community estimates indicate that it can generate around 162 tokens per second on an NVIDIA RTX 4090 and can run effectively even on memory‑constrained devices. This makes it a strong choice for complex agentic systems, deep code analysis, and intensive reasoning tasks where a balance between performance and hardware cost is required.
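If you serve the model behind an OpenAI-compatible endpoint (as vLLM and similar inference stacks expose by default), a chat request can be sketched as below. The URL and model id are placeholders for your own instance, not values confirmed by this page:

```python
import json
from urllib import request

# Placeholder endpoint for a self-hosted OpenAI-compatible server (assumption).
API_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "gemma-4-26b-a4b-it",  # model id as served by your endpoint (assumption)
    "messages": [{"role": "user", "content": "Summarise MoE routing in one sentence."}],
    "max_tokens": 128,
    "temperature": 0.7,
}

def ask(url: str = API_URL) -> str:
    """POST the chat payload and return the first completion's text."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# ask() performs the network call against a running instance;
# here we only show the payload that would be sent.
print(json.dumps(payload, indent=2))
```

Calling `ask()` against a live instance returns the generated reply; the same payload shape works with any server that implements the OpenAI chat-completions API.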
The developers’ usage recommendations for the model are available in the model card: https://ai.google.dev/gemma/docs/core/model_card_4?hl=en
| Model Name | Context | Type | GPU | Status | Link |
|---|---|---|---|---|---|
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following scenarios:
| Context | Parallelism | GPUs | Price/hr | Max Concurrency |
|---|---|---|---|---|
| 262,144 | pipeline | 3 | $0.88 | 1.819 |
| 262,144 | tensor | 2 | $0.93 | 2.032 |
| 262,144 | tensor | 4 | $0.96 | 2.831 |
| 262,144 | pipeline | 3 | $1.06 | 1.819 |
| 262,144 | tensor | 4 | $1.12 | 1.300 |
| 262,144 |  | 1 | $1.20 | 1.020 |
| 262,144 | tensor | 2 | $1.23 | 2.032 |
| 262,144 | tensor | 4 | $1.26 | 2.831 |
| 262,144 | tensor | 2 | $1.56 | 2.032 |
| 262,144 |  | 1 | $1.59 | 1.020 |
| 262,144 | tensor | 4 | $1.82 | 0.994 |
| 262,144 | tensor | 2 | $1.92 | 2.032 |
| 262,144 |  | 1 | $2.37 | 4.694 |
| 262,144 |  | 1 | $3.83 | 4.694 |
| 262,144 |  | 1 | $4.11 | 5.765 |
| 262,144 |  | 1 | $4.74 | 9.363 |
| Context | Parallelism | GPUs | Price/hr | Max Concurrency |
|---|---|---|---|---|
| 262,144 | tensor | 2 | $0.93 | 0.980 |
| 262,144 | tensor | 4 | $0.96 | 1.780 |
| 262,144 | tensor | 2 | $1.23 | 0.980 |
| 262,144 | tensor | 4 | $1.26 | 1.780 |
| 262,144 | tensor | 2 | $1.56 | 0.980 |
| 262,144 | tensor | 2 | $1.92 | 0.980 |
| 262,144 | tensor | 2 | $2.22 | 2.205 |
| 262,144 |  | 1 | $2.37 | 3.642 |
| 262,144 | tensor | 2 | $2.93 | 2.205 |
| 262,144 |  | 1 | $3.83 | 3.642 |
| 262,144 |  | 1 | $4.11 | 4.714 |
| 262,144 |  | 1 | $4.74 | 8.312 |
| Context | Parallelism | GPUs | Price/hr | Max Concurrency |
|---|---|---|---|---|
| 262,144 | tensor | 4 | $1.62 | 2.410 |
| 262,144 | pipeline | 6 | $1.65 | 1.984 |
| 262,144 | tensor | 4 | $2.34 | 2.410 |
| 262,144 |  | 1 | $2.37 | 1.823 |
| 262,144 | tensor | 4 | $2.89 | 2.410 |
| 262,144 | tensor | 4 | $3.60 | 2.410 |
| 262,144 |  | 1 | $3.83 | 1.823 |
| 262,144 | pipeline | 3 | $3.89 | 2.622 |
| 262,144 |  | 1 | $4.11 | 2.895 |
| 262,144 | tensor | 4 | $4.28 | 4.859 |
| 262,144 | pipeline | 3 | $4.34 | 2.622 |
| 262,144 |  | 1 | $4.74 | 6.492 |
| 262,144 | tensor | 4 | $5.74 | 4.859 |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.