GigaChat3.1‑702B‑A36B, also referred to as GigaChat 3.1 Ultra, is built on a Mixture‑of‑Experts architecture that distributes 702 billion parameters across multiple expert sub‑networks while activating only 36 billion at each forward step. Two architectural innovations take its throughput to a qualitatively new level. The Multi‑head Latent Attention mechanism compresses keys and values into a low‑dimensional latent space, which shrinks the KV‑cache several times over and relieves long‑context bottlenecks. At the same time, Multi‑Token Prediction trains the model to predict several subsequent tokens in a single pass. In production systems this enables speculative (parallel) decoding: the MTP heads cheaply draft a short sequence of candidate tokens, and the main model verifies the whole draft in a single forward pass. The result is a 38–40% speedup in generation with no loss in quality, which is critical for services handling thousands of concurrent users.
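To make the decoding scheme concrete, here is a minimal sketch of the draft‑and‑verify loop behind MTP‑based speculative decoding. The `draft_with_mtp_heads` and `score_with_main_model` callables are hypothetical stand‑ins for the real model calls, and the greedy acceptance rule shown is the generic technique, not GigaChat's exact implementation.

```python
# Minimal sketch of MTP-based speculative decoding (greedy variant).
# draft_with_mtp_heads() and score_with_main_model() are hypothetical
# stand-ins for the actual model calls.

from typing import Callable, List

def speculative_decode(
    prompt_ids: List[int],
    draft_with_mtp_heads: Callable[[List[int], int], List[int]],
    score_with_main_model: Callable[[List[int], List[int]], List[int]],
    max_new_tokens: int = 256,
    draft_len: int = 4,
) -> List[int]:
    out = list(prompt_ids)
    while len(out) - len(prompt_ids) < max_new_tokens:
        # 1. The cheap MTP heads propose the next `draft_len` tokens.
        draft = draft_with_mtp_heads(out, draft_len)
        # 2. The main model scores the whole draft in one forward pass,
        #    returning its own greedy choice at each draft position.
        target = score_with_main_model(out, draft)
        # 3. Accept the longest prefix where draft and main model agree;
        #    at the first mismatch, fall back to the main model's token.
        accepted = 0
        for d, t in zip(draft, target):
            if d != t:
                break
            accepted += 1
        out.extend(draft[:accepted])
        if accepted < len(draft):
            out.append(target[accepted])
    return out
```

Because several draft tokens are usually accepted per verification pass, the expensive main model runs far fewer forward passes than in plain token‑by‑token decoding, which is where the reported speedup comes from.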
The key difference between version 3.1 and the preliminary release is a full DPO training stage conducted natively in 8‑bit floating point. Unlike conventional post‑training quantization, the model was trained directly in FP8, which avoids accumulating quantization error. As a result, memory consumption was halved, and on several tasks quality even surpassed that of the reference BF16 variant. Efficient matrix operations are handled by the DeepGEMM library together with optimized CUDA and Triton kernels, which keeps the architecture fully compatible with 8‑bit inference.
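As an illustration of why native FP8 halves weight memory, here is a toy PyTorch sketch of per‑tensor E4M3 quantization and dequantization. It is only a round‑trip check under simple assumptions; the actual serving path relies on DeepGEMM's FP8 matmul kernels rather than dequantizing back to BF16.

```python
# Illustration of FP8 (E4M3) weight quantization with a per-tensor scale.
# Not GigaChat's kernels: just shows the 2x memory saving vs BF16 and the
# magnitude of round-trip error for a random weight matrix.

import torch

def quantize_fp8(w: torch.Tensor):
    # E4M3 has a max representable magnitude of 448; scale weights into range.
    fp8_max = 448.0
    scale = w.abs().max().clamp(min=1e-12) / fp8_max
    w_fp8 = (w / scale).to(torch.float8_e4m3fn)   # 1 byte per weight
    return w_fp8, scale

def dequantize_fp8(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return w_fp8.to(torch.bfloat16) * scale

w = torch.randn(4096, 4096, dtype=torch.bfloat16)
w_fp8, scale = quantize_fp8(w)
w_hat = dequantize_fp8(w_fp8, scale)

print("bytes bf16:", w.numel() * 2, "| bytes fp8:", w_fp8.numel())
print("max abs round-trip error:", (w.float() - w_hat.float()).abs().max().item())
```

Training and serving directly in this representation, instead of quantizing a finished BF16 checkpoint, is what lets the model sidestep post‑hoc quantization error.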
The training corpus spanned ten languages – from English and Russian to Chinese, Arabic, Uzbek and Kazakh – and included books, academic papers, large code repositories and mathematical datasets. The entire corpus underwent multi‑stage cleaning: deduplication, language filtering, and automated quality control using heuristics and classifiers. Synthetic data played a particularly important role, amounting to roughly 5.5 trillion tokens.
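For readers curious what such multi‑stage cleaning looks like in practice, below is a schematic Python sketch of the three stages named above: exact deduplication, language filtering and heuristic quality control. The `detect_language` and `looks_low_quality` functions are simplified placeholders for the classifiers a production pipeline would use; none of this is GigaChat's actual tooling.

```python
# Schematic corpus-cleaning pipeline: dedup -> language filter -> quality filter.
# detect_language() stands in for a real language-ID classifier.

import hashlib
from typing import Iterable, Iterator

ALLOWED_LANGS = {"en", "ru", "zh", "ar", "uz", "kk"}  # subset of the ten languages

def detect_language(text: str) -> str:
    # Placeholder heuristic: Cyrillic characters -> "ru", otherwise "en".
    return "ru" if any("\u0400" <= ch <= "\u04FF" for ch in text) else "en"

def looks_low_quality(text: str) -> bool:
    # Simple heuristics: too short, or dominated by non-alphabetic characters.
    if len(text) < 200:
        return True
    alpha = sum(ch.isalpha() for ch in text)
    return alpha / len(text) < 0.6

def clean_corpus(docs: Iterable[str]) -> Iterator[str]:
    seen = set()
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest in seen:                       # exact deduplication
            continue
        seen.add(digest)
        if detect_language(doc) not in ALLOWED_LANGS:
            continue                             # language filtering
        if looks_low_quality(doc):
            continue                             # heuristic quality control
        yield doc
```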
On benchmarks, GigaChat 3.1 Ultra demonstrates strong results in the class of open MoE models, confidently competing with DeepSeek‑V3‑0324 and Qwen3‑235B‑A22B, and holds leading positions on tests related to Russian‑domain knowledge.
The technical capabilities directly define the model’s deployment scenarios. Support for a 262,144‑token context window combined with the compressed MLA cache makes it an ideal core for enterprise RAG systems and intelligent chatbots that process many thousands of pages of documentation, reports and knowledge bases. The model is trained for multi‑step agentic dialogues with executable tool calls, making it a ready‑made “brain” for autonomous systems, from voice assistants to integrations with corporate APIs. Thanks to native FP8 support and compatibility with vLLM, SGLang and other inference engines, it can be deployed on private clusters with full control over data, which is critical for privacy‑sensitive sectors such as healthcare, finance and the public sector.
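As a concrete example of the agentic path, the sketch below calls a privately hosted instance through the OpenAI‑compatible endpoint that vLLM and SGLang both expose and lets the model decide whether to issue a tool call. The base_url, model id and the get_weather tool are placeholders chosen for illustration, not fixed values.

```python
# Calling a privately hosted GigaChat 3.1 Ultra instance via an
# OpenAI-compatible server (e.g. vLLM or SGLang) with one example tool.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="GigaChat3.1-702B-A36B",  # placeholder model id on the local server
    messages=[{"role": "user", "content": "What's the weather in Kazan?"}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:
    # An agent loop would execute the tool and return the result to the model
    # in a follow-up "tool" message for the next reasoning step.
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)
```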
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing. We recommend deploying a private instance in one of the configurations listed below:
| Name | GPU | Price | TPS | Max Concurrency |
|---|---|---|---|---|
| 262,144 tensor | 8 | $18.35 | 1.537 | |
| 262,144 tensor | 4 | $19.23 | 2.268 | |
| Name | GPU | Price | TPS | Max Concurrency |
|---|---|---|---|---|
| 262,144 tensor | 8 | $37.37 | 2.242 | |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.