gemma-3-270m

Gemma 3 270M is a compact language model from Google, designed for efficient execution of narrowly defined tasks after task-specific fine-tuning. The model belongs to the Gemma 3 family and inherits its core architecture with a few size-specific adjustments. Roughly 170 million of its parameters are embedding parameters serving a large vocabulary of 262,144 tokens, while the remaining ~100 million make up the transformer blocks, which use sliding window attention in 15 of the 18 layers to keep computation efficient on long sequences while retaining full attention in the remaining layers. The model supports a context window of up to 32,768 tokens and offers broad multilingual coverage (over 140 languages).
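These figures can be read straight from the published model configuration. Below is a minimal sketch, assuming the Hugging Face model id "google/gemma-3-270m" and the usual Gemma 3 config field names (both are assumptions and may differ slightly between transformers versions):

    # Inspect vocabulary size, layer count, sliding window and context length
    # from the published config (field names are an assumption and may vary).
    from transformers import AutoConfig

    config = AutoConfig.from_pretrained("google/gemma-3-270m")
    for field in ("vocab_size", "num_hidden_layers", "sliding_window",
                  "max_position_embeddings"):
        print(field, getattr(config, field, "n/a"))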

Technically, Gemma 3 270M is heavily optimized for resource-constrained environments. Its small size of 270 million parameters makes it well suited for deployment on edge devices, in web browsers, or in cloud environments where speed and low operating costs are critical. The developers note that the model was trained with Quantization-Aware Training (QAT) and supports INT4 quantization with virtually no loss in accuracy, which further simplifies local inference.
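For quick local experiments, the model can also be quantized to 4-bit on the fly. The sketch below is one possible setup, assuming the model id "google/gemma-3-270m", a CUDA-capable GPU and the bitsandbytes backend; the officially released QAT/INT4 checkpoints are a separate artifact:

    # Hedged sketch: on-the-fly 4-bit loading via transformers + bitsandbytes.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "google/gemma-3-270m"  # assumed Hugging Face model id
    quant_config = BitsAndBytesConfig(load_in_4bit=True,
                                      bnb_4bit_compute_dtype=torch.bfloat16)

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id,
                                                 quantization_config=quant_config,
                                                 device_map="auto")

    inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0]))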

Unlike the larger models in the family, Gemma 3 270M is not intended for complex dialogue; it is focused on narrow tasks, where it is exceptionally efficient. Its philosophy is "the right tool for the specific job": there is no point in running a large model for a single, repetitive operation, especially since in most cases the model will need additional training for that specific task anyway, and after fine-tuning it performs it with remarkable precision. Gemma 3 270M is well suited to building a fleet of small, highly specialized models, each an expert in its own domain. Its primary use cases include text classification, entity extraction (e.g., from legal documents or medical records), converting unstructured text into structured formats, sentiment analysis, toxic content filtering, and request routing. Thanks to its speed, it is also an excellent choice for applications that require fast, real-time responses.
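As an illustration of the "fine-tune for one job" approach, here is a rough LoRA fine-tuning sketch using peft and trl. The dataset file, hyperparameters and output directory are placeholders, and the exact SFTTrainer/SFTConfig arguments vary between trl versions:

    # Hedged sketch: LoRA fine-tuning for a narrow task (e.g. request routing).
    from datasets import load_dataset
    from peft import LoraConfig
    from trl import SFTConfig, SFTTrainer

    # Placeholder dataset: a JSONL file with a "text" column of training examples.
    dataset = load_dataset("json", data_files="routing_examples.jsonl", split="train")

    peft_config = LoraConfig(r=16, lora_alpha=32,
                             target_modules=["q_proj", "v_proj"],
                             task_type="CAUSAL_LM")

    trainer = SFTTrainer(
        model="google/gemma-3-270m",          # assumed Hugging Face model id
        train_dataset=dataset,
        args=SFTConfig(output_dir="gemma-270m-router",
                       per_device_train_batch_size=8,
                       num_train_epochs=3),
        peft_config=peft_config,
    )
    trainer.train()

At 270 million parameters, such a run fits comfortably on a single consumer GPU, which is what makes the "fleet of specialists" pattern practical.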


Announce Date: 14.08.2025
Parameters: 0.268B
Context: 32K
Attention Type: Sliding Window Attention
VRAM requirements: 0.3 GB with 4-bit quantization (see the estimate below)
Developer: Google DeepMind
Transformers Version: 4.55.0.dev0
License: gemma
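
A quick back-of-envelope calculation shows where the 0.3 GB figure comes from:

    # Rough estimate only: weights at 4 bits, plus runtime overhead.
    params = 268e6               # ~0.268B parameters
    bytes_per_param = 0.5        # 4-bit quantization ~= half a byte per weight
    weights_gb = params * bytes_per_param / 1024**3
    print(f"weights alone: ~{weights_gb:.2f} GB")   # ~0.12 GB
    # KV cache, activations and framework overhead bring the practical
    # requirement to roughly the 0.3 GB quoted above.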

Public endpoint

Use our pre-built public endpoints to test inference and explore gemma-3-270m capabilities.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the serving sketch after this list).
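
Once an instance is provisioned, serving the model is straightforward. The sketch below shows one possible self-hosted setup using vLLM's Python API; it assumes your installed vLLM version supports this architecture, and a fine-tuned checkpoint or LoRA adapter path of your own would replace the base model id:

    # Hedged sketch: self-hosted batch inference with vLLM on a dedicated GPU.
    from vllm import LLM, SamplingParams

    llm = LLM(model="google/gemma-3-270m", max_model_len=32768)  # assumed model id
    params = SamplingParams(max_tokens=64, temperature=0.2)

    prompts = ["Classify the sentiment of: 'The delivery was late again.'"]
    outputs = llm.generate(prompts, params)
    print(outputs[0].outputs[0].text)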

Recommended configurations for hosting gemma-3-270m

Prices:
Name                    vCPU   RAM, MB   Disk, GB   GPUs   Price per hour
rtx2080ti-1.16.32.160   16     32768     160        1      $0.41
teslat4-1.16.16.160     16     16384     160        1      $0.46
teslaa10-1.16.32.160    16     32768     160        1      $0.53
teslaa2-2.16.32.160     16     32768     160        2      $0.57
rtx3090-1.16.24.160     16     24576     160        1      $0.88
rtx4090-1.16.32.160     16     32768     160        1      $1.15
teslav100-1.12.64.160   12     65536     160        1      $1.20
rtx5090-1.16.64.160     16     65536     160        1      $1.59
teslaa100-1.16.64.160   16     65536     160        1      $2.58
teslah100-1.16.64.160   16     65536     160        1      $5.11


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.