GigaChat3.1-702B-A36B

GigaChat3.1‑702B‑A36B, also referred to as GigaChat 3.1 Ultra, is built on a Mixture‑of‑Experts architecture that distributes 702 billion parameters across multiple expert sub‑networks while activating only 36 billion on each forward pass. Two architectural innovations take its throughput to a qualitatively new level. The Multi‑head Latent Attention mechanism compresses keys and values into a low‑dimensional latent space, which shrinks the KV‑cache several‑fold and removes the main long‑context bottleneck. At the same time, Multi‑Token Prediction trains the model to predict several subsequent tokens in a single pass. In production systems this enables speculative decoding: the lightweight MTP heads propose a draft sequence, and the main model verifies it in a single pass, accepting the longest correct prefix. The result is a 38–40% speedup in generation with no loss in quality, which is critical for services serving thousands of concurrent users.
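The draft-and-verify loop can be sketched in a few lines. This is a toy illustration, not GigaChat's actual decoder: the two "models" below are stand-in deterministic functions (the draft one is made slightly less accurate on purpose), and every name here is hypothetical.

```python
def target_next_token(prefix):
    # Stand-in for the expensive full forward pass: a deterministic toy rule.
    return (sum(prefix) * 31 + len(prefix)) % 100

def draft_next_token(prefix):
    # Stand-in for a cheap MTP head: slightly less accurate, so it
    # occasionally diverges from the target model.
    t = target_next_token(prefix)
    return t if len(prefix) % 6 else (t + 1) % 100

def draft_next_tokens(prefix, k):
    # The draft model proposes k tokens ahead, autoregressively.
    out = list(prefix)
    for _ in range(k):
        out.append(draft_next_token(out))
    return out[len(prefix):]

def speculative_step(prefix, k=4):
    """One draft-and-verify round: accept the longest draft prefix the
    target model agrees with, then let the target contribute one token."""
    draft = draft_next_tokens(prefix, k)
    accepted = []
    for tok in draft:
        if tok == target_next_token(prefix + accepted):  # verification
            accepted.append(tok)
        else:
            break  # first mismatch invalidates the rest of the draft
    # The target model always adds at least one (corrected) token per round.
    accepted.append(target_next_token(prefix + accepted))
    return prefix + accepted
```

Because accepted tokens are exactly those the target model would have produced, the output matches plain sequential decoding; the speedup comes from verifying several draft tokens in one target pass instead of one token per pass.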

The key difference between version 3.1 and the preliminary release is a full DPO training stage conducted natively in 8‑bit floating point. Unlike conventional post‑training quantization, the model was trained directly in FP8, which avoids the accumulation of quantization error. As a result, memory consumption was halved, and on several tasks the quality even surpassed that of the reference BF16 variant. Efficient matrix operations are powered by the DeepGEMM library together with optimized CUDA and Triton kernels, making the architecture compatible with end‑to‑end 8‑bit inference.
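To see why FP8 training is viable at all, it helps to look at the rounding error of the format itself. The sketch below is a rough emulation of FP8 e4m3-style mantissa rounding in plain Python (exponent range and saturation are ignored), not the DeepGEMM kernels; it only illustrates that the relative error per value is bounded, which is what per-tensor scaling relies on.

```python
import math

def round_to_e4m3_grid(x: float) -> float:
    """Round x to the nearest value representable with a 3-bit mantissa,
    roughly emulating FP8 e4m3 rounding (exponent limits ignored)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)        # x = m * 2**e with 0.5 <= |m| < 1
    steps = 2 ** 4              # 1 implicit + 3 stored mantissa bits
    return math.ldexp(round(m * steps) / steps, e)

# Relative rounding error stays below 2**-4 = 6.25% at any magnitude,
# which is why scaled FP8 accumulates far less error than naive
# post-training quantization to a fixed integer grid.
vals = [0.1, 1.0, 3.14159, 123.456]
errs = [abs(round_to_e4m3_grid(v) - v) / v for v in vals]
```

For example, 3.14159 lands on the nearest grid point 3.25, a relative error of about 3.4%, well inside the 6.25% bound.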

The training corpus spanned ten languages – from English and Russian to Chinese, Arabic, Uzbek and Kazakh – and included books, academic papers, large code repositories and mathematical datasets. The entire corpus underwent multi‑stage cleaning: deduplication, language filtering, and automated quality control using heuristics and classifiers. Synthetic data played a particularly important role, amounting to roughly 5.5 trillion tokens.

On benchmarks, GigaChat 3.1 Ultra demonstrates strong results in the class of open MoE models, confidently competing with DeepSeek‑V3‑0324 and Qwen3‑235B‑A22B, and holds leading positions on tests related to Russian‑domain knowledge.

The technical capabilities directly define the model’s deployment scenarios. Support for a context window of 262,144 tokens and the compressed MLA cache makes it an ideal core for enterprise RAG systems and intelligent chatbots processing many thousands of pages of documentation, reports and knowledge bases. The model is trained for multi‑step agentic dialogues with executable tool calls, making it a ready‑made “brain” for autonomous systems — from voice assistants to integrations with corporate APIs. Thanks to native FP8 support and compatibility with vLLM, SGLang and other inference engines, deployment is possible on private clusters with full control over data — critically important for privacy‑sensitive sectors such as healthcare, finance and the public sector.
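The impact of the compressed MLA cache on long-context serving is easy to estimate with back-of-envelope arithmetic. The context length and layer count below come from the spec table; the head geometry and latent width are illustrative assumptions, not published GigaChat dimensions.

```python
def kv_cache_gib(tokens, layers, per_token_floats, bytes_per_float):
    """Decode-time KV-cache size for one sequence, in GiB."""
    return tokens * layers * per_token_floats * bytes_per_float / 2**30

CONTEXT = 262_144   # context window from the spec table
LAYERS = 64         # layer count from the spec table

# Illustrative (assumed) dimensions, not published GigaChat values:
HEADS, HEAD_DIM = 128, 128   # hypothetical standard MHA geometry
LATENT_DIM = 512             # hypothetical MLA compressed KV latent

# Plain multi-head attention caches K and V for every head, in BF16...
mha = kv_cache_gib(CONTEXT, LAYERS, 2 * HEADS * HEAD_DIM, 2)
# ...while MLA caches one small latent vector per token, here in FP8.
mla = kv_cache_gib(CONTEXT, LAYERS, LATENT_DIM, 1)
```

Under these assumptions a single full-context sequence would need about 1 TiB of KV-cache with plain multi-head attention, versus single-digit GiB with a compressed latent, which is the gap that makes 262K-token RAG workloads practical on a single server.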


Announce Date: 21.03.2026
Parameters: 716B
Experts: 256
Activated at inference: 36B
Context: 263K
Layers: 64
Attention Type: Multi-head Latent Attention
Developer: Sber AI
Transformers Version: 4.53.2
License: MIT

Public endpoint

Use our pre-built public endpoints for free to test inference and explore GigaChat3.1-702B-A36B capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting GigaChat3.1-702B-A36B

Prices:
Name                             Context   Parallelism  GPUs  Price, hour  TPS
dedicated-h100-8.96.768.5760-1   262,144   tensor       8                  1.537
teslaa100-8.44.512.480.nvlink    262,144   tensor       8     $18.35       1.537
h200-4.32.768.480                262,144   tensor       4     $19.23       2.268
h200-4.32.768.480.nvlink         262,144   tensor       4     $19.23       2.268
h200-8.52.1024.960               262,144   tensor       8     $37.37       2.242
h200-8.52.1024.960.nvlink        262,144   tensor       8     $37.37       2.242

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.