gemma-3-270m

Gemma 3 270M is a compact language model from Google, designed for efficient task execution after specialized fine-tuning. It belongs to the Gemma 3 family and inherits the family's core architecture. Roughly 170 million of its parameters sit in the embedding table for its large vocabulary of 262,144 tokens, while the remaining ~100 million make up the transformer blocks; 15 of the 18 layers use sliding window attention (the other 3 use full attention), which keeps computation efficient on long sequences while preserving full attention at key points. The model supports a context window of up to 32,768 tokens and offers strong multilingual coverage (over 140 languages).
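The parameter split described above can be sanity-checked with simple arithmetic. This is a rough sketch: the hidden width of 640 is an assumption taken from the published Gemma 3 270M configuration, not stated in this page.

```python
# Back-of-the-envelope check of the embedding vs. transformer parameter split.
VOCAB_SIZE = 262_144   # vocabulary size from the model card
HIDDEN_DIM = 640       # assumed hidden width (published Gemma 3 270M config)
TOTAL_PARAMS = 268_000_000  # total parameter count from the model card

# The input embedding table alone accounts for most of the model.
embedding_params = VOCAB_SIZE * HIDDEN_DIM
transformer_params = TOTAL_PARAMS - embedding_params

print(f"Embedding parameters:   {embedding_params / 1e6:.1f}M")   # ~167.8M
print(f"Transformer parameters: {transformer_params / 1e6:.1f}M") # ~100.2M
```

The result matches the "approximately 170 million / 100 million" split stated above.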

Technically, Gemma 3 270M is heavily optimized for resource-constrained deployments. At 270 million parameters, it is well suited to edge devices, web browsers, and cloud environments where latency and operating cost are critical. Google notes that the model was trained with Quantization-Aware Training (QAT) and supports INT4 quantization with virtually no loss of accuracy, which further simplifies local inference.
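To illustrate why INT4 matters at this scale, here is a rough weights-only VRAM estimate; the byte-per-parameter figures are standard for FP16 and 4-bit formats, and the estimate deliberately ignores KV cache, activations, and quantization metadata overhead.

```python
# Rough weights-only memory footprint of the 268M-parameter model.
PARAMS = 268_000_000  # parameter count from the model card

def weight_footprint_mib(params: int, bytes_per_param: float) -> float:
    """Weights-only size in MiB; excludes KV cache, activations, and overhead."""
    return params * bytes_per_param / (1024 ** 2)

print(f"FP16: {weight_footprint_mib(PARAMS, 2.0):.0f} MiB")  # ~511 MiB
print(f"INT4: {weight_footprint_mib(PARAMS, 0.5):.0f} MiB")  # ~128 MiB
```

At INT4 the weights fit comfortably in the memory budget of a browser tab or a small edge device.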

Unlike the larger models in the family, Gemma 3 270M is not intended for complex dialogue; it is focused on specific tasks where it demonstrates exceptional efficiency. Its philosophy is "the right tool for the job." There is little point in running a large model for a single, repetitive operation, especially since in most cases a model needs additional training for that specific task anyway; after fine-tuning, Gemma 3 270M performs it with remarkable precision. It is well suited to building a fleet of small, highly specialized models, each an expert in its own domain. Primary use cases include text classification, entity extraction (e.g., from legal documents or medical records), converting unstructured text into structured formats, sentiment analysis, toxic-content filtering, and request routing. Thanks to its speed, it is also an excellent choice for applications that need real-time responses.
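As a sketch of the request-routing use case, a fine-tuned classifier is typically prompted to answer with exactly one label, and a thin post-processing layer normalizes the raw completion. The helper and label set below are hypothetical, not part of any Gemma API.

```python
# Hypothetical post-processing for a fine-tuned router: the model is prompted
# to reply with exactly one route label; we normalize its raw completion to
# the closest allowed label, falling back to "other".
ROUTES = ("billing", "technical_support", "sales", "other")

def route_request(model_output: str) -> str:
    """Map a raw model completion to one of the allowed routes."""
    text = model_output.strip().lower()
    for route in ROUTES:
        if route in text:
            return route
    return "other"

print(route_request(" Technical_Support\n"))  # technical_support
print(route_request("I think billing."))      # billing
print(route_request("unsure"))                # other
```

Keeping the label vocabulary fixed and validating the output this way makes a small specialized model safe to put in front of downstream systems.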


Announce Date: 14.08.2025
Parameters: 268M
Context: 32K
Layers: 18, using full attention: 3
Attention Type: Sliding Window Attention
Developer: Google DeepMind
Transformers Version: 4.55.0.dev0
License: gemma

Public endpoint

Use our pre-built public endpoints for free to test inference and explore gemma-3-270m capabilities. You can obtain an API access token on the token management page after registration and verification.
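Once an endpoint is available, a request would typically follow the common OpenAI-compatible chat shape. This is an assumption about the API surface, not documented behavior; the base URL is a placeholder and the token comes from the token management page mentioned above.

```python
# Sketch of an OpenAI-compatible chat request (hypothetical endpoint shape).
API_TOKEN = "YOUR_API_TOKEN"  # placeholder; obtain on the token management page

headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/json",
}
payload = {
    "model": "gemma-3-270m",
    "messages": [{"role": "user", "content": "Classify: 'please refund my order'"}],
    "max_tokens": 16,
}
# e.g. requests.post(f"{BASE_URL}/v1/chat/completions", headers=headers, json=payload)
```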
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting gemma-3-270m

Prices:

| Name                        | Context | GPUs                | Price, hour | TPS     |
|-----------------------------|---------|---------------------|-------------|---------|
| teslat4-1.16.16.160         | 32,768  | 1                   | $0.33       | 52.789  |
| rtx2080ti-1.10.16.500       | 32,768  | 1                   | $0.38       | 32.170  |
| teslaa2-1.16.32.160         | 32,768  | 1                   | $0.38       | 52.789  |
| teslaa10-1.16.32.160        | 32,768  | 1                   | $0.53       | 85.779  |
| rtx3080-1.16.32.160         | 32,768  | 1                   | $0.57       | 28.047  |
| rtx3090-1.16.24.160         | 32,768  | 1                   | $0.83       | 85.779  |
| rtx4090-1.16.32.160         | 32,768  | 1                   | $1.02       | 85.779  |
| teslav100-1.12.64.160       | 32,768  | 1                   | $1.20       | 118.769 |
| rtxa5000-2.16.64.160.nvlink | 32,768  | 2 (tensor parallel) | $1.23       | 173.295 |
| rtx5090-1.16.64.160         | 32,768  | 1                   | $1.59       | 118.769 |
| teslaa100-1.16.64.160       | 32,768  | 1                   | $2.37       | 316.710 |
| h100-1.16.64.160            | 32,768  | 1                   | $3.83       | 316.710 |
| h100nvl-1.16.96.160         | 32,768  | 1                   | $4.11       | 374.442 |
| h200-1.16.128.160           | 32,768  | 1                   | $4.74       | 568.259 |

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.