gemma-4-E2B-it

Tags: reasoning, multimodal

Gemma‑4‑E2B‑it is the most compact and energy‑efficient model in the lineup, designed to operate under extremely tight resource constraints. Like the E4B version, it uses the Per‑Layer Embeddings (PLE) technique, which delivers high performance with minimal memory consumption. The model has 5.1 billion parameters in total, of which only 2.3 billion are active during inference. It is built on 35 layers, supports a 128K‑token context window, and uses hybrid attention with a 512‑token sliding window.
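In a hybrid scheme like this, most layers restrict each token to attending over only the most recent positions. A minimal sketch of a causal sliding-window attention mask in NumPy (illustrative only; the window size is shrunk for readability, and this is not Gemma's actual implementation):

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean mask: True where query position i may attend to key position j."""
    i = np.arange(seq_len)[:, None]  # query positions
    j = np.arange(seq_len)[None, :]  # key positions
    # Causal (j <= i) and within the last `window` positions (j > i - window)
    return (j <= i) & (j > i - window)

# Toy example: 8 tokens with a window of 4 (the model uses 512)
mask = sliding_window_mask(8, 4)
print(mask.sum(axis=1))  # each row attends to at most `window` positions
```

Because each token's keys are bounded by the window, the KV cache for these layers stays constant-size regardless of sequence length, which is what keeps long-context memory use low.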

E2B is fully multimodal and can process not only text and images but also audio (equipped with an audio encoder of ~300M parameters). This feature set, combined with extremely low memory requirements, makes the model unique in its class. Developers emphasise that E2B is specifically designed for efficient local use on laptops and mobile devices. According to community estimates, the model can run on devices with less than 1.5 GB of RAM, including smartphones.

Despite its modest size, E2B delivers impressive results. Numerous independent community evaluations show that this model surpasses Gemma‑3 27B on some tasks, even though its effective size is 12 times smaller. Developers particularly recommend E2B for routine agentic workflows, optical character recognition (OCR) tasks, and scenarios where low latency and on‑device inference are critical. At the same time, the Apache 2.0 licence opens up broad opportunities for integrating the model into a wide variety of commercial applications.

For the developers’ usage recommendations, see the official model card: https://ai.google.dev/gemma/docs/core/model_card_4?hl=en


Announce Date: 02.03.2026
Parameters: 5.1B total (2.3B active)
Context: 128K (131,072 tokens)
Layers: 35, using full attention: 3, using no attention: 20
Attention Type: Sliding Window Attention
Developer: Google DeepMind
Transformers Version: 5.5.0.dev0
vLLM Version: gemma4
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore gemma-4-E2B-it capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name | Context | Type | GPU | Status | Link
There are no public endpoints for this model yet.
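Once an endpoint is available, inference can typically be tested over an HTTP chat-completions API. The sketch below only builds the request; the base URL, token placeholder, and OpenAI-compatible request schema are assumptions, not confirmed details of this service:

```python
import json
import urllib.request

# Hypothetical values: substitute the real endpoint URL and your API token
BASE_URL = "https://example-endpoint.invalid/v1/chat/completions"
API_TOKEN = "YOUR_API_TOKEN"

def build_request(prompt: str) -> urllib.request.Request:
    """Assemble a chat-completions request for gemma-4-E2B-it."""
    payload = {
        "model": "gemma-4-E2B-it",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Summarize the benefits of on-device inference.")
# response = urllib.request.urlopen(req)  # send only once an endpoint exists
```

Separating request construction from sending makes it easy to swap in the real base URL and token once a public endpoint is published.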

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting gemma-4-E2B-it

Prices:

Name | Context | Parallelism | GPUs | Price/hour | TPS
teslaa2-1.16.32.240 | 131,072 | - | 1 | $0.39 | 6.003
teslat4-1.16.64.160 | 131,072 | - | 1 | $0.42 | 6.003
rtx2080ti-2.16.64.160 | 131,072 | tensor | 2 | $0.71 | 9.416
rtx3080-2.16.64.160 | 131,072 | tensor | 2 | $1.03 | 7.298
rtx4090-1.32.64.160 | 131,072 | - | 1 | $1.18 | 14.477
rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | 36.959
rtx5090-1.32.64.160 | 131,072 | - | 1 | $1.69 | 22.952
teslaa10-4.16.128.160 | 131,072 | tensor | 4 | $1.75 | 81.921
teslaa100-1.16.64.160 | 131,072 | - | 1 | $2.37 | 73.800
rtx3090-4.16.128.160 | 131,072 | tensor | 4 | $3.01 | 81.921
h100-1.16.64.160 | 131,072 | - | 1 | $3.83 | 73.800
h100nvl-1.16.96.160 | 131,072 | - | 1 | $4.11 | 88.630
teslaa100-2.24.256.160.nvlink | 131,072 | tensor | 2 | $4.93 | 155.603
h200-2.24.256.160.nvlink | 131,072 | tensor | 2 | $9.40 | 284.841
h200-4.32.768.480 | 131,072 | tensor | 4 | $19.23 | 577.685
Prices:

Name | Context | Parallelism | GPUs | Price/hour | TPS
teslaa2-1.16.32.240 | 131,072 | - | 1 | $0.39 | 3.967
teslat4-1.16.64.160 | 131,072 | - | 1 | $0.42 | 3.967
rtx2080ti-2.16.64.160 | 131,072 | tensor | 2 | $0.71 | 7.381
rtx3080-2.16.64.160 | 131,072 | tensor | 2 | $1.03 | 5.262
rtx4090-1.32.64.160 | 131,072 | - | 1 | $1.18 | 12.442
rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | 34.923
rtx5090-1.32.64.160 | 131,072 | - | 1 | $1.69 | 20.917
teslaa10-4.16.128.160 | 131,072 | tensor | 4 | $1.75 | 79.886
teslaa100-1.16.64.160 | 131,072 | - | 1 | $2.37 | 71.764
rtx3090-4.16.128.160 | 131,072 | tensor | 4 | $3.01 | 79.886
h100-1.16.64.160 | 131,072 | - | 1 | $3.83 | 71.764
h100nvl-1.16.96.160 | 131,072 | - | 1 | $4.11 | 86.595
teslaa100-2.24.256.160.nvlink | 131,072 | tensor | 2 | $4.93 | 153.568
h200-2.24.256.160.nvlink | 131,072 | tensor | 2 | $9.40 | 282.805
h200-4.32.768.480 | 131,072 | tensor | 4 | $19.23 | 575.650
Prices:

Name | Context | Parallelism | GPUs | Price/hour | TPS
teslaa2-1.16.32.240 | 131,072 | - | 1 | $0.39 | 2.774
teslat4-1.16.64.160 | 131,072 | - | 1 | $0.42 | 2.774
rtx2080ti-2.16.64.160 | 131,072 | tensor | 2 | $0.71 | 6.188
rtx3080-2.16.64.160 | 131,072 | tensor | 2 | $1.03 | 4.069
rtx4090-1.32.64.160 | 131,072 | - | 1 | $1.18 | 11.249
rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | 33.730
rtx5090-1.32.64.160 | 131,072 | - | 1 | $1.69 | 19.724
teslaa10-4.16.128.160 | 131,072 | tensor | 4 | $1.75 | 78.693
teslaa100-1.16.64.160 | 131,072 | - | 1 | $2.37 | 70.571
rtx3090-4.16.128.160 | 131,072 | tensor | 4 | $3.01 | 78.693
h100-1.16.64.160 | 131,072 | - | 1 | $3.83 | 70.571
h100nvl-1.16.96.160 | 131,072 | - | 1 | $4.11 | 85.402
teslaa100-2.24.256.160.nvlink | 131,072 | tensor | 2 | $4.93 | 152.375
h200-2.24.256.160.nvlink | 131,072 | tensor | 2 | $9.40 | 281.612
h200-4.32.768.480 | 131,072 | tensor | 4 | $19.23 | 574.457
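The hourly price and throughput (TPS, tokens per second) figures above can be combined into a rough cost per generated token. A back-of-the-envelope sketch (it assumes the listed TPS is sustained continuously and ignores input-token processing and idle time):

```python
def cost_per_million_tokens(price_per_hour: float, tps: float) -> float:
    """Approximate $ per 1M generated tokens at sustained throughput."""
    tokens_per_hour = tps * 3600
    return price_per_hour / tokens_per_hour * 1_000_000

# Figures taken from the first pricing table above
print(round(cost_per_million_tokens(1.18, 14.477), 2))    # rtx4090-1.32.64.160
print(round(cost_per_million_tokens(19.23, 577.685), 2))  # h200-4.32.768.480
```

Note that the higher-priced multi-GPU configurations can still be cheaper per token because their throughput scales faster than their hourly price.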

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.