gemma-4-26B-A4B-it

reasoning
multimodal
coding

Gemma‑4‑26B‑A4B‑it is Google’s first open model built on a Mixture‑of‑Experts (MoE) architecture. Of its 25.2 billion total parameters, only a small fraction, between 3.8 and 4 billion, is activated for each token. According to the developers, this efficiency lets the model reach approximately 97% of the quality of the dense 31B model at a significantly lower computational cost. At release, the model ranks 6th among open models on the Arena AI leaderboard, outperforming competitors up to 20 times larger.

The 26B A4B model has 30 layers and uses hybrid attention: most layers apply a sliding window of 1,024 tokens, while a few use full attention, supporting a context window of 262,144 (256K) tokens. The model is multimodal, handling both text and images. Unlike dense alternatives, the MoE model is specifically optimised for the efficient execution of agentic workflows and shows significant progress over Gemma‑3: on the T2‑Bench agent benchmark, Gemma‑4 26B A4B scores 86.4%, whereas the previous generation achieved only 6.6%.

For developers, the key advantage of this model is its exceptional deployment efficiency. Community estimates indicate that the model can generate 162 tokens per second on an NVIDIA RTX 4090 accelerator and can run effectively even on memory‑constrained devices. This makes it an ideal choice for complex agentic systems, deep code analysis, and intensive reasoning tasks where a balance between performance and hardware costs is required.
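A quick back-of-the-envelope calculation shows why an MoE model of this size fits consumer hardware: all 25.2B weights must still reside in memory, but per-token compute scales with the ~4B active parameters. The sketch below estimates weight VRAM at common quantization levels; the 10% overhead factor for activations and KV cache is an assumption, not a measurement:

```python
def vram_estimate_gb(total_params_b: float, bits_per_param: float,
                     overhead_frac: float = 0.10) -> float:
    """Rough VRAM (GB) to hold the weights, plus an assumed fixed
    overhead fraction for activations and KV cache."""
    weight_bytes = total_params_b * 1e9 * bits_per_param / 8
    return weight_bytes * (1 + overhead_frac) / 1e9

# All 25.2B parameters stay resident even though only ~4B are active per
# token -- MoE cuts compute per token, not weight storage.
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{vram_estimate_gb(25.2, bits):.0f} GB")
```

At 4-bit quantization the weights come in under the 24 GB of an RTX 4090, which is consistent with the community throughput figures cited above.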

The developers’ usage recommendations for the model are available here: https://ai.google.dev/gemma/docs/core/model_card_4?hl=en


Announce Date: 11.03.2026
Parameters: 27B
Experts: 128
Activated at inference: 4B
Context: 262K (262,144 tokens)
Layers: 30 (5 with full attention)
Attention Type: Sliding Window Attention
Developer: Google DeepMind
Transformers Version: 5.5.0.dev0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore gemma-4-26B-A4B-it capabilities. You can obtain an API access token on the token management page after registration and verification.
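Once an endpoint is published, it can be queried over HTTP. The sketch below assumes an OpenAI-compatible chat API; the base URL is a placeholder (take the real link from the endpoint table) and the token comes from the token management page:

```python
import json
import urllib.request

API_BASE = "https://example-endpoint.immers.cloud/v1"  # placeholder, not a real URL
API_TOKEN = "YOUR_TOKEN"  # from the token management page

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request (API shape assumed)."""
    payload = {
        "model": "gemma-4-26B-A4B-it",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("Summarise MoE routing in one sentence.")
# urllib.request.urlopen(req)  # uncomment once a real endpoint and token exist
```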
Model Name Context Type GPU Status Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable the full context window for long sequences,
  • ensure top-tier security by processing data in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
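The custom-weights scenario above can be sketched as a launch command. This assumes a vLLM serving stack (the page does not specify one); the model id, adapter path, and GPU count are placeholders, not values from this page:

```shell
# Serve the model with a LoRA adapter on a dedicated instance.
# Model id and adapter path are assumptions -- substitute your own weights.
vllm serve google/gemma-4-26b-a4b-it \
  --max-model-len 262144 \
  --tensor-parallel-size 2 \
  --enable-lora \
  --lora-modules my-adapter=/path/to/lora-adapter
```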

Recommended server configurations for hosting gemma-4-26B-A4B-it

Prices:
Name | Context | Parallelism | GPUs | Price, hour | TPS | Link
teslat4-3.32.64.160 | 262,144 | pipeline | 3 | $0.88 | 1.819 | Launch
teslaa10-2.16.64.160 | 262,144 | tensor | 2 | $0.93 | 2.032 | Launch
teslat4-4.16.64.160 | 262,144 | tensor | 4 | $0.96 | 2.831 | Launch
teslaa2-3.32.128.160 | 262,144 | pipeline | 3 | $1.06 | 1.819 | Launch
rtx2080ti-4.16.32.160 | 262,144 | tensor | 4 | $1.12 | 1.300 | Launch
teslav100-1.12.64.160 | 262,144 | - | 1 | $1.20 | 1.020 | Launch
rtxa5000-2.16.64.160.nvlink | 262,144 | tensor | 2 | $1.23 | 2.032 | Launch
teslaa2-4.32.128.160 | 262,144 | tensor | 4 | $1.26 | 2.831 | Launch
rtx3090-2.16.64.160 | 262,144 | tensor | 2 | $1.56 | 2.032 | Launch
rtx5090-1.16.64.160 | 262,144 | - | 1 | $1.59 | 1.020 | Launch
rtx3080-4.16.64.160 | 262,144 | tensor | 4 | $1.82 | 0.994 | Launch
rtx4090-2.16.64.160 | 262,144 | tensor | 2 | $1.92 | 2.032 | Launch
teslaa100-1.16.64.160 | 262,144 | - | 1 | $2.37 | 4.694 | Launch
h100-1.16.64.160 | 262,144 | - | 1 | $3.83 | 4.694 | Launch
h100nvl-1.16.96.160 | 262,144 | - | 1 | $4.11 | 5.765 | Launch
h200-1.16.128.160 | 262,144 | - | 1 | $4.74 | 9.363 | Launch
Prices:
Name | Context | Parallelism | GPUs | Price, hour | TPS | Link
teslaa10-2.16.64.160 | 262,144 | tensor | 2 | $0.93 | 0.980 | Launch
teslat4-4.16.64.160 | 262,144 | tensor | 4 | $0.96 | 1.780 | Launch
rtxa5000-2.16.64.160.nvlink | 262,144 | tensor | 2 | $1.23 | 0.980 | Launch
teslaa2-4.32.128.160 | 262,144 | tensor | 4 | $1.26 | 1.780 | Launch
rtx3090-2.16.64.160 | 262,144 | tensor | 2 | $1.56 | 0.980 | Launch
rtx4090-2.16.64.160 | 262,144 | tensor | 2 | $1.92 | 0.980 | Launch
teslav100-2.16.64.240 | 262,144 | tensor | 2 | $2.22 | 2.205 | Launch
teslaa100-1.16.64.160 | 262,144 | - | 1 | $2.37 | 3.642 | Launch
rtx5090-2.16.64.160 | 262,144 | tensor | 2 | $2.93 | 2.205 | Launch
h100-1.16.64.160 | 262,144 | - | 1 | $3.83 | 3.642 | Launch
h100nvl-1.16.96.160 | 262,144 | - | 1 | $4.11 | 4.714 | Launch
h200-1.16.128.160 | 262,144 | - | 1 | $4.74 | 8.312 | Launch
Prices:
Name | Context | Parallelism | GPUs | Price, hour | TPS | Link
teslaa10-4.16.64.160 | 262,144 | tensor | 4 | $1.62 | 2.410 | Launch
teslaa2-6.32.128.160 | 262,144 | pipeline | 6 | $1.65 | 1.984 | Launch
rtxa5000-4.16.128.160.nvlink | 262,144 | tensor | 4 | $2.34 | 2.410 | Launch
teslaa100-1.16.64.160 | 262,144 | - | 1 | $2.37 | 1.823 | Launch
rtx3090-4.16.64.160 | 262,144 | tensor | 4 | $2.89 | 2.410 | Launch
rtx4090-4.16.64.160 | 262,144 | tensor | 4 | $3.60 | 2.410 | Launch
h100-1.16.64.160 | 262,144 | - | 1 | $3.83 | 1.823 | Launch
teslav100-3.64.256.320 | 262,144 | pipeline | 3 | $3.89 | 2.622 | Launch
h100nvl-1.16.96.160 | 262,144 | - | 1 | $4.11 | 2.895 | Launch
teslav100-4.32.64.160 | 262,144 | tensor | 4 | $4.28 | 4.859 | Launch
rtx5090-3.16.96.160 | 262,144 | pipeline | 3 | $4.34 | 2.622 | Launch
h200-1.16.128.160 | 262,144 | - | 1 | $4.74 | 6.492 | Launch
rtx5090-4.16.128.160 | 262,144 | tensor | 4 | $5.74 | 4.859 | Launch

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.