gemma-4-31B-it

reasoning
multimodal
coding

Gemma-4-31B-it is the flagship dense model of the lineup and, at the time of its release, set a new standard for quality and performance among open models of comparable size. With 30.7 billion parameters, a deep 60-layer architecture, and a 256K-token context window, the model delivers high-quality reasoning and strong coding ability, only slightly trailing the largest closed- and open-source giants. Its architecture is built on a hybrid attention mechanism that alternates local sliding-window attention (1024 tokens) with full global layers that preserve long-range context. To handle long sequences, the global layers use unified keys and values together with proportional rotary position encoding (Proportional RoPE). These optimisations reduce KV-cache requirements by up to 74% compared with traditional full attention. The model is multimodal out of the box, processing text and images through a vision encoder with roughly 550 million parameters.
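The KV-cache saving from the hybrid scheme can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch, assuming grouped-query attention with 8 KV heads of dimension 128 and a bf16 cache (illustrative values not stated above); the published figure of up to 74% also depends on the unified-KV optimisation and the actual sequence length:

```python
# Back-of-the-envelope KV-cache comparison: full attention vs. the hybrid
# local/global scheme. Layer counts come from the spec sheet (60 layers,
# 10 global); KV-head count, head dimension, and dtype are assumptions.

SEQ_LEN = 262_144   # full context window, tokens
WINDOW = 1_024      # sliding-window size for local layers
N_LAYERS = 60
N_GLOBAL = 10                  # layers attending over the full sequence
N_LOCAL = N_LAYERS - N_GLOBAL  # 50 sliding-window layers

KV_HEADS = 8        # assumption: grouped-query attention
HEAD_DIM = 128      # assumption
BYTES = 2           # bfloat16

def kv_bytes(tokens_cached: int, n_layers: int) -> int:
    """Cache size: a K and a V tensor for every layer."""
    return 2 * n_layers * tokens_cached * KV_HEADS * HEAD_DIM * BYTES

full = kv_bytes(SEQ_LEN, N_LAYERS)  # every layer caches the whole sequence
hybrid = kv_bytes(WINDOW, N_LOCAL) + kv_bytes(SEQ_LEN, N_GLOBAL)

print(f"full attention: {full / 2**30:.1f} GiB")
print(f"hybrid:         {hybrid / 2**30:.1f} GiB")
print(f"saving:         {1 - hybrid / full:.0%}")
```

Under these assumed dimensions the layer/window ratio alone already yields a saving in the same ballpark as the quoted figure.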

According to the official developer blog, at launch Gemma-4-31B-it ranked 3rd among open models on the global Arena AI text leaderboard, outperforming some models 20 times its size. It achieves excellent results across a range of key benchmarks: on MMLU-Pro it scores 85.2%, a significant improvement over Gemma-3 27B's 67.6%. Progress in reasoning and coding is even more striking: on LiveCodeBench the score nearly tripled, from 29.1% to 80.0%, and on the challenging AIME 2026 mathematics test the model reaches 89.2% versus 20.8% for its predecessor. On multimodal understanding tasks the model is also strong: 76.9% on MMMU and 85.6% on MATH-Vision.

Developers recommend the 31B model for scenarios that require consistently high generation quality and deep logical analysis, provided sufficient computational resources are available. Thanks to the Apache 2.0 licence, the model can be freely fine-tuned and used in commercial products. The unquantised bfloat16 version fits on a single NVIDIA H100 with 80 GB of memory, while quantised variants run efficiently on consumer GPUs, opening the door to local deployment of powerful agentic systems and assistants.
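The single-H100 claim follows from simple arithmetic: bfloat16 stores 2 bytes per parameter, so the weights alone take roughly 62-63 GB. A quick check, with parameter counts taken from the description above (activation and KV-cache overhead excluded):

```python
# Quick check of the single-GPU claim: bfloat16 stores 2 bytes per
# parameter. Parameter counts are taken from the description above
# (30.7B language model + ~0.55B vision encoder); activations and the
# KV cache add overhead on top of this.
PARAMS = 30.7e9 + 0.55e9
BYTES_PER_PARAM = 2  # bfloat16

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"~{weights_gb:.1f} GB of weights")  # comfortably under the H100's 80 GB
```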

The developers' usage recommendations for the model are available at: https://ai.google.dev/gemma/docs/core/model_card_4?hl=en


Announce Date: 11.03.2026
Parameters: 33B
Context: 262K
Layers: 60 (10 with full attention)
Attention Type: Sliding Window Attention
Developer: Google DeepMind
Transformers Version: 5.5.0.dev0
vLLM Version: gemma4
License: Apache 2.0
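The layer split in the spec sheet (60 layers, 10 of them full attention) implies one global layer per six. The sketch below assumes a uniform 5-local : 1-global repeat; the actual interleaving may differ.

```python
# The spec sheet lists 60 layers, 10 of them full attention: one global
# layer per six. A uniform 5-local : 1-global repeat is an assumption;
# the actual interleaving may differ.
N_LAYERS, N_GLOBAL = 60, 10
period = N_LAYERS // N_GLOBAL  # 6

pattern = ["global" if (i + 1) % period == 0 else "local"
           for i in range(N_LAYERS)]

print(pattern[:6])             # ['local', 'local', 'local', 'local', 'local', 'global']
print(pattern.count("global")) # 10
```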

Public endpoint

Use our pre-built public endpoints free of charge to test inference and explore the capabilities of gemma-4-31B-it. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable the full context window for long sequences,
  • process data with top-tier security in an isolated, dedicated environment,
  • serve custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting gemma-4-31B-it

Prices:
Name                          | Context | Parallelism | GPUs | Price, hour | TPS
teslaa2-4.32.128.480          | 262,144 | tensor      | 4    | $1.29       | 1.100
teslat4-4.48.192.320          | 262,144 | tensor      | 4    | $1.43       | 1.100
teslaa10-4.16.128.160         | 262,144 | tensor      | 4    | $1.75       | 2.165
rtxa5000-4.16.128.160.nvlink  | 262,144 | tensor      | 4    | $2.34       | 2.165
teslaa100-1.16.64.160         | 262,144 | -           | 1    | $2.37       | 1.910
rtx4090-3.16.96.160           | 262,144 | pipeline    | 3    | $2.83       | 1.459
rtx5090-2.16.64.160           | 262,144 | tensor      | 2    | $2.93       | 1.285
rtx3090-4.16.128.160          | 262,144 | tensor      | 4    | $3.01       | 2.165
rtx4090-4.16.64.160           | 262,144 | tensor      | 4    | $3.60       | 2.165
h100-1.16.64.160              | 262,144 | -           | 1    | $3.83       | 1.910
h100nvl-1.16.96.160           | 262,144 | -           | 1    | $4.11       | 2.376
teslaa100-2.24.256.160.nvlink | 262,144 | tensor      | 2    | $4.93       | 4.481
h200-2.24.256.160.nvlink      | 262,144 | tensor      | 2    | $9.40       | 8.543
h200-4.32.768.480             | 262,144 | tensor      | 4    | $19.23      | 17.748
Prices:
Name                          | Context | Parallelism | GPUs | Price, hour | TPS
teslaa2-6.32.128.480          | 262,144 | pipeline    | 6    | $1.69       | 1.495
teslaa10-4.16.128.160         | 262,144 | tensor      | 4    | $1.75       | 1.680
rtxa5000-4.16.128.160.nvlink  | 262,144 | tensor      | 4    | $2.34       | 1.680
teslaa100-1.16.64.160         | 262,144 | -           | 1    | $2.37       | 1.425
rtx3090-4.16.128.160          | 262,144 | tensor      | 4    | $3.01       | 1.680
rtx4090-4.16.64.160           | 262,144 | tensor      | 4    | $3.60       | 1.680
h100-1.16.64.160              | 262,144 | -           | 1    | $3.83       | 1.425
h100nvl-1.16.96.160           | 262,144 | -           | 1    | $4.11       | 1.891
rtx5090-3.16.96.160           | 262,144 | pipeline    | 3    | $4.34       | 1.773
teslaa100-2.24.256.160.nvlink | 262,144 | tensor      | 2    | $4.93       | 3.996
rtx5090-4.32.128.160          | 262,144 | tensor      | 4    | $5.84       | 2.746
h200-2.24.256.160.nvlink      | 262,144 | tensor      | 2    | $9.40       | 8.058
h200-4.32.768.480             | 262,144 | tensor      | 4    | $19.23      | 17.263
Prices:
Name                          | Context | Parallelism | GPUs | Price, hour | TPS
rtxa5000-6.24.256.160.nvlink  | 262,144 | pipeline    | 6    | $3.63       | 2.085
rtxa5000-8.24.256.160.nvlink  | 262,144 | tensor      | 8    | $4.61       | 3.498
teslaa100-2.24.256.160        | 262,144 | tensor      | 2    | $4.93       | 2.987
teslaa100-2.24.256.160.nvlink | 262,144 | tensor      | 2    | $4.93       | 2.987
rtx4090-6.44.256.160          | 262,144 | pipeline    | 6    | $5.83       | 2.085
rtx5090-4.32.128.160          | 262,144 | tensor      | 4    | $5.84       | 1.737
rtx4090-8.44.256.160          | 262,144 | tensor      | 8    | $7.51       | 3.498
h100-2.24.256.160             | 262,144 | tensor      | 2    | $7.84       | 2.987
h200-2.24.256.160.nvlink      | 262,144 | tensor      | 2    | $9.40       | 7.049
h100nvl-3.24.384.960          | 262,144 | pipeline    | 3    | $12.43      | 6.957
h100nvl-4.32.384.480          | 262,144 | tensor      | 4    | $16.23      | 9.994
h200-4.32.768.480             | 262,144 | tensor      | 4    | $19.23      | 16.254
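To compare the hourly rates above against long-term billing, a rough monthly figure can be derived by assuming ~730 hours per month and straight hourly pricing (actual monthly plans may be discounted):

```python
# Rough monthly cost from the hourly rates above, assuming ~730 hours
# per month and straight hourly billing (long-term monthly plans may
# be discounted).
HOURS_PER_MONTH = 730

def monthly_cost(price_per_hour: float) -> float:
    return price_per_hour * HOURS_PER_MONTH

# Cheapest vs. most powerful configurations from the first table:
print(f"teslaa2-4.32.128.480: ${monthly_cost(1.29):,.2f}/month")
print(f"h200-4.32.768.480:    ${monthly_cost(19.23):,.2f}/month")
```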

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.