Gemma-3-12B

multimodal

Gemma 3 12B is a well-balanced mid-sized multimodal language model from Google DeepMind, designed for specialized professional tasks. With 12 billion parameters, it combines strong performance with computational efficiency and supports a wide range of capabilities, from text analysis to image processing. Gemma 3 12B encodes visual input into tokens, enabling deep image understanding. Its "Pan & Scan" technique adaptively handles images of any aspect ratio, preserving detail by splitting them into 896×896 crops for the vision encoder.
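The cropping idea behind Pan & Scan can be illustrated with a few lines of arithmetic. This is only a sketch of the general tiling principle, not Gemma 3's actual crop-selection heuristics; the function name and the even-spacing rule are assumptions for illustration.

```python
import math

def pan_scan_crops(width, height, crop_size=896):
    """Illustrative sketch: cover a non-square image with square
    crop windows of crop_size x crop_size, the basic idea behind
    Pan & Scan. Gemma 3's real heuristics are not reproduced here."""
    # number of crops needed along each axis (at least one)
    nx = max(1, math.ceil(width / crop_size))
    ny = max(1, math.ceil(height / crop_size))
    crops = []
    for j in range(ny):
        for i in range(nx):
            # spread crop origins evenly so the windows span the image
            x0 = 0 if nx == 1 else round(i * (width - crop_size) / (nx - 1))
            y0 = 0 if ny == 1 else round(j * (height - crop_size) / (ny - 1))
            crops.append((x0, y0, x0 + crop_size, y0 + crop_size))
    return crops

# A 1792x896 panorama is covered by two 896x896 crops
print(pan_scan_crops(1792, 896))
```

Each crop is then resized to the encoder's native 896×896 input, so wide or tall images keep their fine detail instead of being squashed into a single square.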

Another key feature is the expanded context window of up to 128K tokens, which lets the model process lengthy legal documents or scientific articles in a single request without losing context. Multilingual support covers more than 140 languages, including Russian, and the upgraded tokenizer shared with Gemini 2.0 ensures high-quality translation, text generation, and cross-lingual analysis. Officially supported quantized checkpoints also make it possible to run the model on consumer-grade GPUs with minimal loss in quality.
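The long context is practical largely because most layers use sliding-window attention: only the full-attention layers must cache keys and values for the entire sequence. A rough KV-cache estimate shows the effect; the per-layer numbers below (KV heads, head dimension, window size) are assumptions for illustration, not official specs.

```python
def kv_cache_bytes(seq_len, layers_global=8, layers_local=40,
                   window=1024, kv_heads=8, head_dim=256, dtype_bytes=2):
    """Rough KV-cache size for a hybrid attention stack: global layers
    cache the full sequence, sliding-window layers only the last
    `window` tokens. Architecture numbers here are assumptions."""
    per_token = 2 * kv_heads * head_dim * dtype_bytes  # K and V tensors
    global_part = layers_global * seq_len * per_token
    local_part = layers_local * min(seq_len, window) * per_token
    return global_part + local_part

full_context = 131_072
print(f"hybrid stack: {kv_cache_bytes(full_context) / 2**30:.1f} GiB")
print(f"all-global stack: "
      f"{kv_cache_bytes(full_context, layers_global=48, layers_local=0) / 2**30:.1f} GiB")
```

Under these assumptions the hybrid design needs roughly a sixth of the cache memory an all-full-attention stack would at 128K tokens, which is what makes long contexts feasible on modest GPUs.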

As a result, Gemma 3 12B is a versatile tool for data analysis, document processing, and information extraction from visual sources—with the ability to run locally and scalable integration into modern AI infrastructures.


Announce Date: 12.03.2025
Parameters: 12B
Context: 128K
Layers: 48 (8 with full attention)
Attention Type: Sliding Window Attention
Developer: Google DeepMind
Transformers Version: 4.50.0.dev0
License: gemma

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Gemma-3-12B capabilities. You can obtain an API access token on the token management page after registration and verification.
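Hosted endpoints of this kind typically expose an OpenAI-compatible chat API. The sketch below assembles a multimodal request payload; the endpoint URL, model name, and the `image_url` content-part convention are assumptions, so verify them against the provider's actual API documentation before use.

```python
import json

# Hypothetical values: substitute the real endpoint URL and the token
# issued on the token management page.
API_URL = "https://example-endpoint.invalid/v1/chat/completions"
API_TOKEN = "YOUR_TOKEN"

def build_chat_request(prompt, image_url=None, model="gemma-3-12b"):
    """Assemble an OpenAI-style chat payload; the multimodal
    `image_url` content part is a common convention, but check it
    against the provider's actual schema."""
    content = [{"type": "text", "text": prompt}]
    if image_url:
        content.append({"type": "image_url", "image_url": {"url": image_url}})
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "max_tokens": 512,
    }

payload = build_chat_request("Describe this image.",
                             "https://example.com/cat.png")
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the endpoint with an `Authorization: Bearer <token>` header.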
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting Gemma-3-12B

Prices:

| Name                        | Context | Parallelism | GPUs | Price, hour | TPS    |
|-----------------------------|---------|-------------|------|-------------|--------|
| teslaa10-1.16.32.160        | 131,072 | -           | 1    | $0.53       | 1.446  |
| teslat4-2.16.32.160         | 131,072 | tensor      | 2    | $0.54       | 1.881  |
| teslaa2-2.16.32.160         | 131,072 | tensor      | 2    | $0.57       | 1.881  |
| rtx2080ti-2.12.64.160       | 131,072 | tensor      | 2    | $0.69       | 1.048  |
| rtx3090-1.16.24.160         | 131,072 | -           | 1    | $0.83       | 1.446  |
| rtx4090-1.16.32.160         | 131,072 | -           | 1    | $1.02       | 1.446  |
| teslav100-1.12.64.160       | 131,072 | -           | 1    | $1.20       | 2.112  |
| rtxa5000-2.16.64.160.nvlink | 131,072 | tensor      | 2    | $1.23       | 3.212  |
| rtx3080-3.16.64.160         | 131,072 | pipeline    | 3    | $1.43       | 1.483  |
| rtx5090-1.16.64.160         | 131,072 | -           | 1    | $1.59       | 2.112  |
| rtx3080-4.16.64.160         | 131,072 | tensor      | 4    | $1.82       | 2.084  |
| teslaa100-1.16.64.160       | 131,072 | -           | 1    | $2.37       | 6.107  |
| h100-1.16.64.160            | 131,072 | -           | 1    | $3.83       | 6.107  |
| h100nvl-1.16.96.160         | 131,072 | -           | 1    | $4.11       | 7.273  |
| h200-1.16.128.160           | 131,072 | -           | 1    | $4.74       | 11.185 |
Prices:

| Name                        | Context | Parallelism | GPUs | Price, hour | TPS    |
|-----------------------------|---------|-------------|------|-------------|--------|
| teslat4-2.16.32.160         | 131,072 | tensor      | 2    | $0.54       | 1.168  |
| teslaa2-2.16.32.160         | 131,072 | tensor      | 2    | $0.57       | 1.168  |
| teslaa10-2.16.64.160        | 131,072 | tensor      | 2    | $0.93       | 2.499  |
| rtx2080ti-3.16.64.160       | 131,072 | pipeline    | 3    | $0.95       | 1.020  |
| rtx2080ti-4.16.32.160       | 131,072 | tensor      | 4    | $1.12       | 1.704  |
| teslav100-1.12.64.160       | 131,072 | -           | 1    | $1.20       | 1.399  |
| rtxa5000-2.16.64.160.nvlink | 131,072 | tensor      | 2    | $1.23       | 2.499  |
| rtx3090-2.16.64.160         | 131,072 | tensor      | 2    | $1.56       | 2.499  |
| rtx5090-1.16.64.160         | 131,072 | -           | 1    | $1.59       | 1.399  |
| rtx3080-4.16.64.160         | 131,072 | tensor      | 4    | $1.82       | 1.371  |
| rtx4090-2.16.64.160         | 131,072 | tensor      | 2    | $1.92       | 2.499  |
| teslaa100-1.16.64.160       | 131,072 | -           | 1    | $2.37       | 5.394  |
| h100-1.16.64.160            | 131,072 | -           | 1    | $3.83       | 5.394  |
| h100nvl-1.16.96.160         | 131,072 | -           | 1    | $4.11       | 6.560  |
| h200-1.16.128.160           | 131,072 | -           | 1    | $4.74       | 10.472 |
Prices:

| Name                        | Context | Parallelism | GPUs | Price, hour | TPS   |
|-----------------------------|---------|-------------|------|-------------|-------|
| teslat4-3.32.64.160         | 131,072 | pipeline    | 3    | $0.88       | 1.048 |
| teslaa10-2.16.64.160        | 131,072 | tensor      | 2    | $0.93       | 1.279 |
| teslat4-4.16.64.160         | 131,072 | tensor      | 4    | $0.96       | 2.148 |
| teslaa2-3.32.128.160        | 131,072 | pipeline    | 3    | $1.06       | 1.048 |
| rtxa5000-2.16.64.160.nvlink | 131,072 | tensor      | 2    | $1.23       | 1.279 |
| teslaa2-4.32.128.160        | 131,072 | tensor      | 4    | $1.26       | 2.148 |
| rtx3090-2.16.64.160         | 131,072 | tensor      | 2    | $1.56       | 1.279 |
| rtx4090-2.16.64.160         | 131,072 | tensor      | 2    | $1.92       | 1.279 |
| teslav100-2.16.64.240       | 131,072 | tensor      | 2    | $2.22       | 2.611 |
| teslaa100-1.16.64.160       | 131,072 | -           | 1    | $2.37       | 4.174 |
| rtx5090-2.16.64.160         | 131,072 | tensor      | 2    | $2.93       | 2.611 |
| h100-1.16.64.160            | 131,072 | -           | 1    | $3.83       | 4.174 |
| h100nvl-1.16.96.160         | 131,072 | -           | 1    | $4.11       | 5.339 |
| h200-1.16.128.160           | 131,072 | -           | 1    | $4.74       | 9.252 |
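The hourly price and TPS columns above can be combined into a rough cost per generated token. The sketch below assumes a single stream at the listed TPS with no batching, so it is an upper-bound estimate rather than a real serving cost.

```python
def usd_per_million_tokens(price_per_hour, tps):
    """Rough serving cost per 1M generated tokens on a dedicated
    instance running a single stream: dollars per hour divided by
    tokens generated per hour. Batching would lower the real cost."""
    tokens_per_hour = tps * 3600
    return price_per_hour / tokens_per_hour * 1_000_000

# Figures taken from the tables above: teslaa10-1 at $0.53/h and
# 1.446 TPS versus h200-1 at $4.74/h and 11.185 TPS.
print(round(usd_per_million_tokens(0.53, 1.446), 2))
print(round(usd_per_million_tokens(4.74, 11.185), 2))
```

Despite the much higher hourly rate, the faster card narrows the gap considerably on a per-token basis, which is worth checking before picking the cheapest row.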

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.