gemma-4-26B-A4B-it

reasoning
multimodal
coding

Gemma‑4‑26B‑A4B‑it is Google’s first open model built on a Mixture‑of‑Experts (MoE) architecture. Of its 25.2 billion total parameters, only a small fraction (roughly 3.8 to 4 billion) is activated for each token. According to the developers, this efficiency lets the model reach approximately 97% of the quality of the dense 31B model at significantly lower computational cost. At release, the model ranks 6th on the Arena AI leaderboard among open models, outperforming competitors that are 20 times larger.
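The figures above imply that only a small share of the weights does work on any given token. A quick sanity check of that ratio (the parameter counts are taken from this page; the calculation itself is just arithmetic):

```python
# Active-parameter fraction per token, using the figures quoted above.
total_params = 25.2e9    # total parameters
active_params = 4.0e9    # upper bound of activated parameters per token
fraction = active_params / total_params
print(f"{fraction:.1%} of parameters are active per token")  # about 15.9%
```

So roughly one parameter in six participates in each forward pass, which is where the per-token compute savings over a dense model of the same size come from.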

The 26B A4B model is built on 30 layers and uses hybrid attention with a 1024-token sliding window, supporting a 256K-token context window. It is multimodal, handling both text and images. Unlike dense alternatives, the MoE model is specifically optimised for efficient execution of agentic workflows, demonstrating significant progress over Gemma‑3.
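To make the sliding-window part concrete, here is a minimal sketch of a causal sliding-window attention mask, where each position attends only to the most recent `window` tokens. The toy sizes below are for illustration only (the model itself uses a 1024-token window); this is not the model's actual implementation.

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Causal mask: position i may attend to position j only if
    j <= i (causality) and j > i - window (sliding window)."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

# Toy example: 8 tokens, window of 4 (the model uses window=1024).
mask = sliding_window_mask(8, 4)
print(mask.astype(int))
```

With a fixed window, attention cost per token stays constant as the sequence grows, which is why hybrid schemes reserve full attention for only a few layers.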

For developers, the key advantage of this model is its exceptional deployment efficiency. Community estimates indicate that the model can generate 162 tokens per second on an NVIDIA RTX 4090 accelerator and can run effectively even on memory‑constrained devices. This makes it an ideal choice for complex agentic systems, deep code analysis, and intensive reasoning tasks where a balance between performance and hardware costs is required.
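For hardware planning, a back-of-envelope estimate of the weight memory is often the first step. The sketch below assumes bf16 weights (2 bytes per parameter) and deliberately ignores KV cache, activations, and runtime overhead, so treat it as a lower bound rather than a sizing guarantee:

```python
# Back-of-envelope weight-memory estimate.
# Assumptions: bf16 precision (2 bytes/param); KV cache, activations,
# and framework overhead are NOT included.
total_params = 25.2e9
bytes_per_param = 2  # bf16
weights_gib = total_params * bytes_per_param / 2**30
print(f"~{weights_gib:.0f} GiB just for the weights")  # ~47 GiB
```

This is why most single-GPU configurations below pair the model with 80 GB-class accelerators, while smaller cards rely on multi-GPU tensor or pipeline parallelism; quantized weights (e.g. int4) would shrink the footprint roughly fourfold, at some cost in quality.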

For the developers’ usage recommendations for this model, see: https://ai.google.dev/gemma/docs/core/model_card_4?hl=en


Announce Date: 11.03.2026
Parameters: 27B
Experts: 128
Activated at inference: 4B
Context: 256K (262,144 tokens)
Layers: 30 (5 with full attention)
Attention Type: Sliding Window Attention
Developer: Google DeepMind
Transformers Version: 5.5.0.dev0
vLLM Version: gemma4
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore gemma-4-26B-A4B-it capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name Context Type GPU Status Link
There are no public endpoints for this model yet.
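Assuming the public endpoints expose an OpenAI-compatible chat API (an assumption; the model id below is taken from this page, but the request shape and any endpoint URL are placeholders, not documented values), a request body would look like this:

```python
import json

# Hypothetical request body for an OpenAI-compatible chat endpoint.
# POST it to the endpoint URL with an "Authorization: Bearer <token>" header,
# using the API token from the token management page.
payload = {
    "model": "gemma-4-26B-A4B-it",
    "messages": [
        {"role": "user", "content": "Summarise MoE routing in two sentences."}
    ],
    "max_tokens": 256,
}
body = json.dumps(payload)
print(body)
```

Once a public endpoint appears in the table above, only the URL and token need to be filled in.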

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting gemma-4-26B-A4B-it

Prices:
Name                           Context  Parallelism  GPUs  Price, hour  TPS
teslaa2-2.16.32.160            262,144  tensor       2     $0.57        1.404
rtx2080ti-3.12.24.120          262,144  pipeline     3     $0.84        1.167
teslat4-3.32.64.200            262,144  pipeline     3     $0.88        3.165
rtx2080ti-4.16.32.160          262,144  tensor       4     $1.12        2.262
rtxa5000-2.16.64.160.nvlink    262,144  tensor       2     $1.23        3.535
teslat4-4.48.192.320           262,144  tensor       4     $1.43        4.926
rtx5090-1.32.64.160            262,144  -            1     $1.69        1.774
teslaa10-4.16.128.160          262,144  tensor       4     $1.75        9.188
rtx3080-4.16.96.160            262,144  tensor       4     $1.88        1.730
rtx4090-2.16.64.160            262,144  tensor       2     $1.92        3.535
teslaa100-1.16.64.160          262,144  -            1     $2.37        8.167
rtx3090-4.16.128.160           262,144  tensor       4     $3.01        9.188
h100-1.16.64.160               262,144  -            1     $3.83        8.167
h100nvl-1.16.96.160            262,144  -            1     $4.11        10.031
teslaa100-2.24.256.160.nvlink  262,144  tensor       2     $4.93        18.451
h200-2.24.256.160.nvlink       262,144  tensor       2     $9.40        34.700
h200-4.32.768.480              262,144  tensor       4     $19.23       71.517
Prices:
Name                           Context  Parallelism  GPUs  Price, hour  TPS
teslat4-3.32.64.200            262,144  pipeline     3     $0.88        1.336
rtxa5000-2.16.64.160.nvlink    262,144  tensor       2     $1.23        1.706
teslaa2-4.32.128.480           262,144  tensor       4     $1.29        3.097
teslaa2-3.32.256.160           262,144  pipeline     3     $1.31        1.336
teslat4-4.48.192.320           262,144  tensor       4     $1.43        3.097
teslaa10-4.16.128.160          262,144  tensor       4     $1.75        7.358
rtx4090-2.16.64.160            262,144  tensor       2     $1.92        1.706
teslaa100-1.16.64.160          262,144  -            1     $2.37        6.337
rtx5090-2.16.64.160            262,144  tensor       2     $2.93        3.836
rtx3090-4.16.128.160           262,144  tensor       4     $3.01        7.358
h100-1.16.64.160               262,144  -            1     $3.83        6.337
h100nvl-1.16.96.160            262,144  -            1     $4.11        8.202
teslaa100-2.24.256.160.nvlink  262,144  tensor       2     $4.93        16.622
h200-2.24.256.160.nvlink       262,144  tensor       2     $9.40        32.870
h200-4.32.768.480              262,144  tensor       4     $19.23       69.688
Prices:
Name                           Context  Parallelism  GPUs  Price, hour  TPS
teslaa2-6.32.128.480           262,144  pipeline     6     $1.69        3.453
teslaa10-4.16.128.160          262,144  tensor       4     $1.75        4.193
rtxa5000-4.16.128.160.nvlink   262,144  tensor       4     $2.34        4.193
teslaa100-1.16.64.160          262,144  -            1     $2.37        3.172
rtx4090-3.16.96.160            262,144  pipeline     3     $2.83        1.366
rtx3090-4.16.128.160           262,144  tensor       4     $3.01        4.193
rtx4090-4.16.64.160            262,144  tensor       4     $3.60        4.193
h100-1.16.64.160               262,144  -            1     $3.83        3.172
h100nvl-1.16.96.160            262,144  -            1     $4.11        5.036
rtx5090-3.16.96.160            262,144  pipeline     3     $4.34        4.563
teslaa100-2.24.256.160.nvlink  262,144  tensor       2     $4.93        13.456
rtx5090-4.32.128.160           262,144  tensor       4     $5.84        8.455
h200-2.24.256.160.nvlink       262,144  tensor       2     $9.40        29.705
h200-4.32.768.480              262,144  tensor       4     $19.23       66.522

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.