gemma-4-31B-it

reasoning
multimodal
coding

Gemma‑4‑31B‑it is the flagship dense model of the lineup; at the time of its release it set a new standard for quality and performance among open models of comparable size. With 30.7 billion parameters, a deep 60‑layer architecture, and a 256K‑token context window, the model demonstrates high‑quality generation, reasoning, and coding, only slightly trailing the largest closed‑source and open‑source giants. Its architecture rests on a hybrid attention mechanism that alternates local sliding‑window layers (a 1,024‑token window) with full global layers that preserve long‑range context. To handle long sequences efficiently, the global layers use shared keys and values together with proportional rotary position encoding (Proportional RoPE). These optimisations cut KV‑cache requirements by up to 74% compared with traditional full attention. The model is multimodal out of the box, processing text and images through a vision encoder of roughly 550 million parameters.
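To make the hybrid layout concrete, here is a minimal sketch of the attention schedule and mask shapes. It assumes a repeating pattern of five sliding‑window layers followed by one global layer (one reading consistent with "60 layers, 10 full attention"; the actual interleaving is not published here), and its naive cache model ignores head counts and the shared‑KV scheme, so it will not reproduce the quoted 74% figure exactly:

```python
import numpy as np

SEQ_LEN = 262_144   # full context window
WINDOW = 1_024      # local sliding-window size
N_LAYERS, N_GLOBAL = 60, 10

# Assumed schedule: five local layers, then one global layer, repeated.
schedule = (["local"] * 5 + ["global"]) * (N_LAYERS // 6)
assert schedule.count("global") == N_GLOBAL

def causal_mask(n: int) -> np.ndarray:
    """Full causal mask: token i attends to tokens 0..i."""
    return np.tril(np.ones((n, n), dtype=bool))

def sliding_window_mask(n: int, window: int) -> np.ndarray:
    """Causal mask restricted to the last `window` positions."""
    idx = np.arange(n)
    return causal_mask(n) & (idx[None, :] > idx[:, None] - window)

print(sliding_window_mask(6, 3).astype(int))  # small demo, not SEQ_LEN

# Naive KV-cache size: a local layer caches at most WINDOW tokens,
# a global layer caches the whole sequence.
hybrid = sum(WINDOW if kind == "local" else SEQ_LEN for kind in schedule)
print(f"hybrid/full cache ratio: {hybrid / (N_LAYERS * SEQ_LEN):.3f}")
```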

According to the official developer blog, at launch Gemma‑4‑31B‑it ranks 3rd among open models on the global Arena AI text leaderboard and confidently outperforms models 20 times its size. It posts excellent results across the key benchmarks. On MMLU‑Pro it scores 85.2%, a significant improvement over the 67.6% of Gemma‑3 27B. The progress in reasoning and programming is even more striking: on LiveCodeBench the score nearly tripled, from 29.1% to 80.0%, and on the challenging AIME 2026 mathematics test the model reaches 89.2% versus 20.8% for its predecessor. Multimodal understanding is also strong: 76.9% on MMMU and 85.6% on MATH‑Vision.

The developers recommend the 31B model for scenarios that demand consistently high generation quality and deep logical analysis, provided sufficient computational resources are available. Thanks to the Apache 2.0 licence, the model can be freely fine‑tuned and used in commercial products. The unquantised bfloat16 weights fit on a single NVIDIA H100 with 80 GB of memory, while quantised variants run efficiently on consumer GPUs, opening the door to local deployment of powerful agentic systems and assistants.
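As a starting point for local experiments, here is a minimal sketch of loading the model with Hugging Face Transformers and 4‑bit bitsandbytes quantisation. The repository id is an assumption for illustration, and text‑only use through AutoModelForCausalLM is assumed (the multimodal checkpoint may require a different Auto class):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "google/gemma-4-31b-it"  # hypothetical repo id, check the hub

# NF4 4-bit quantisation: ~30.7B params * 0.5 byte is roughly 16-20 GB of
# VRAM, within reach of a 24 GB consumer GPU (bf16 needs ~62 GB).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain sliding-window attention in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))
```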

For the developers’ usage recommendations, see the official model card: https://ai.google.dev/gemma/docs/core/model_card_4?hl=en


Announce Date: 11.03.2026
Parameters: 33B
Context: 262K (262,144 tokens)
Layers: 60 (10 with full attention)
Attention Type: Sliding Window Attention
Developer: Google DeepMind
Transformers Version: 5.5.0.dev0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore gemma-4-31B-it capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
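Once an endpoint is available, access typically takes a single authenticated HTTP call. The sketch below assumes an OpenAI‑compatible chat completions route and a placeholder base URL; both are illustrative assumptions, so check the endpoint page for the actual schema:

```python
import requests

BASE_URL = "https://example.immers.cloud/v1"  # placeholder, see your endpoint page
API_TOKEN = "YOUR_API_TOKEN"                  # issued on the token management page

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "model": "gemma-4-31B-it",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 128,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```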

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the deployment sketch below).
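As one illustration of the last two points, here is a hedged sketch of serving the model on a 4‑GPU tensor‑parallel instance with the full context window and a LoRA adapter, using vLLM's offline API. The repository id and adapter path are placeholders, not published artefacts:

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Placeholders: the repo id and LoRA path are illustrative assumptions.
llm = LLM(
    model="google/gemma-4-31b-it",
    tensor_parallel_size=4,    # matches the 4-GPU "tensor" configurations below
    max_model_len=262_144,     # enable the full context window
    enable_lora=True,
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(
    ["Summarise the benefits of a dedicated inference instance."],
    params,
    lora_request=LoRARequest("my-adapter", 1, "/models/loras/my-adapter"),
)
print(outputs[0].outputs[0].text)
```

The pipeline/tensor labels in the configuration tables below presumably map onto the engine's pipeline‑parallel and tensor‑parallel settings for multi‑GPU instances.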

Recommended server configurations for hosting gemma-4-31B-it

Prices:
Name                           Context   Parallelism  GPUs  Price, hour  TPS
teslaa2-6.32.128.160           262,144   pipeline     6     $1.65        1.138
teslaa10-4.16.128.160          262,144   tensor       4     $1.75        1.245
rtxa5000-4.16.128.160.nvlink   262,144   tensor       4     $2.34        1.245
teslaa100-1.16.128.160         262,144   -            1     $2.50        1.098
rtx3090-4.16.96.320            262,144   tensor       4     $2.97        1.245
rtx4090-4.16.96.320            262,144   tensor       4     $3.68        1.245
teslav100-3.64.256.320         262,144   pipeline     3     $3.89        1.298
h100-1.16.128.160              262,144   -            1     $3.95        1.098
h100nvl-1.16.96.160            262,144   -            1     $4.11        1.366
rtx5090-3.16.96.160            262,144   pipeline     3     $4.34        1.298
teslav100-4.32.96.160          262,144   tensor       4     $4.35        1.857
h200-1.16.128.160              262,144   -            1     $4.74        2.265
rtx5090-4.16.128.160           262,144   tensor       4     $5.74        1.857
Prices:
Name                           Context   Parallelism  GPUs  Price, hour  TPS
rtxa5000-6.24.192.160.nvlink   262,144   pipeline     6     $3.50        1.778
teslav100-3.64.256.320         262,144   pipeline     3     $3.89        1.019
h100nvl-1.16.96.160            262,144   -            1     $4.11        1.087
rtx5090-3.16.96.160            262,144   pipeline     3     $4.34        1.019
teslav100-4.32.96.160          262,144   tensor       4     $4.35        1.578
teslaa100-2.24.96.160.nvlink   262,144   tensor       2     $4.61        2.297
rtxa5000-8.24.256.160.nvlink   262,144   tensor       8     $4.61        2.590
h200-1.16.128.160              262,144   -            1     $4.74        1.986
rtx5090-4.16.128.160           262,144   tensor       4     $5.74        1.578
rtx4090-6.44.256.160           262,144   pipeline     6     $5.83        1.778
rtx4090-8.44.256.160           262,144   tensor       8     $7.51        2.590
h100-2.24.256.160              262,144   tensor       2     $7.84        2.297
Prices:
Name                           Context   Parallelism  GPUs  Price, hour  TPS
rtxa5000-6.24.192.160.nvlink   262,144   pipeline     6     $3.50        1.198
rtxa5000-8.24.256.160.nvlink   262,144   tensor       8     $4.61        2.010
teslav100-4.32.256.160         262,144   tensor       4     $4.66        0.998
teslaa100-2.24.128.160.nvlink  262,144   tensor       2     $4.67        1.717
h200-1.16.128.160              262,144   -            1     $4.74        1.407
rtx5090-4.16.128.160           262,144   tensor       4     $5.74        0.998
rtx4090-6.44.256.160           262,144   pipeline     6     $5.83        1.198
rtx4090-8.44.256.160           262,144   tensor       8     $7.51        2.010
h100-2.24.256.160              262,144   tensor       2     $7.84        1.717
h100nvl-2.24.192.240           262,144   tensor       2     $8.17        2.253

Need help?

Contact our dedicated neural network support team at nn@immers.cloud, or send your request to the sales department at sale@immers.cloud.