YandexGPT-5-Lite-8B

YandexGPT-5-Lite-8B-instruct is an 8-billion-parameter language model with a 32k-token context window, developed by Yandex specifically for Russian-language content. The model is built on Yandex's own pretrained YandexGPT 5 Lite base, distinguishing it from many competitors that start from third-party weights. Training was conducted in two stages: the first on a 15-trillion-token dataset (30% of which was in Russian), followed by a "Powerup" stage on a high-quality 320-billion-token dataset. The model's alignment combines SFT (Supervised Fine-Tuning) and RLHF (Reinforcement Learning from Human Feedback) with Yandex's proprietary LogDPO algorithm. LogDPO addresses the "unlearning" problem of traditional DPO approaches, allowing the model to train stably on preference data without degrading the quality of its responses.
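LogDPO itself is unpublished, but the standard DPO objective it modifies can be sketched in a few lines. The function below is purely illustrative (the name, signature, and beta value are assumptions, not Yandex code): it computes the loss for a single preference pair from policy and frozen-reference log-probabilities.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one preference pair (illustrative sketch).

    logp_w, logp_l         -- policy log-probs of the chosen (w) and rejected (l) responses
    ref_logp_w, ref_logp_l -- the same log-probs under the frozen reference model
    beta                   -- strength of the implicit KL constraint to the reference
    """
    # Reward margin: how much more the policy prefers the chosen response
    # than the reference model does, relative to the rejected one.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log(sigmoid(margin)): small when the policy favors the chosen response.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When policy and reference agree exactly, the margin is zero and the loss equals log 2; increasing the policy's preference for the chosen response drives the loss down. The "unlearning" issue LogDPO targets arises because naive DPO can also lower the loss by pushing down the probability of the chosen response, as long as the rejected one falls faster.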

A distinctive feature of the model is its specialized handling of Russian-language content, including a tokenizer vocabulary optimized for Russian. This makes computational resources go further than with models originally designed for English: for Russian texts, YandexGPT's 32k-token context corresponds to roughly a 48k-token context in Qwen-2.5-32B-base, reflecting its more efficient tokenization of Cyrillic. Tokenization has two further quirks: newline characters are replaced with special [NL] tokens, and each dialogue turn is processed separately, which introduces a space at the beginning of each message. The model uses a non-standard dialogue template with an Assistant:[SEP] sequence to trigger response generation and a closing `</s>` token, ensuring correct operation in multi-turn dialogues of any length.
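The formatting scheme described above can be sketched as a small helper. This is only an approximation of the described behavior (the exact role labels and spacing are assumptions; in practice the prompt should be built from the tokenizer's own chat template):

```python
def build_prompt(turns):
    """Assemble a prompt in the spirit of the template described above (sketch).

    turns -- list of (role, text) pairs, with role "user" or "assistant".
    Newlines are replaced with the special [NL] token; each turn begins with
    a leading space; the prompt ends with "Assistant:[SEP]" so the model
    generates the next reply, which it terminates with </s>.
    """
    parts = []
    for role, text in turns:
        text = text.replace("\n", "[NL]")  # newlines become [NL] tokens
        if role == "user":
            parts.append(f" User: {text}")
        else:
            parts.append(f" Assistant:[SEP]{text}</s>")
    return "".join(parts) + " Assistant:[SEP]"
```

For example, `build_prompt([("user", "Привет!\nКак дела?")])` returns `" User: Привет![NL]Как дела? Assistant:[SEP]"` — no raw newlines ever reach the model.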

YandexGPT-5-Lite demonstrates strong results on key benchmarks, matching or surpassing models such as Llama-3.1-8B-instruct and Qwen-2.5-7B-instruct. It performs especially well on RuCulture, a specialized benchmark covering Russian culture, literature, and slang, where it significantly outperforms international counterparts.

YandexGPT-5-Lite-8B-instruct is well suited for building Russian-language chatbots and virtual assistants, especially in corporate environments that require an understanding of Russian cultural context and business practices. Educational platforms can use the model to build intelligent tutors for Russian literature, history, and culture. It is also a strong fit for content marketing and copywriting in Russian, including SEO-optimized texts and content adaptation for a Russian audience. Developers and researchers will find the model useful for fine-tuning on tasks involving Russian content: it is natively trained on Russian-language data and requires little additional adaptation.


Announce Date: 31.03.2025
Parameters: 8.04B
Context: 32K
Layers: 32
Attention Type: Full Attention
VRAM requirements: 7.7 GB with 4-bit quantization
Developer: Yandex
Transformers Version: 4.56.1
License: yandexgpt-5-lite-8b

Public endpoint

Use our pre-built public endpoints for free to test inference and explore YandexGPT-5-Lite-8B capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended configurations for hosting YandexGPT-5-Lite-8B

Prices:

Name                     Context  vCPU  RAM, MB  Disk, GB  GPU  Price, hour
teslat4-1.16.16.160      32,768   16    16384    160       1    $0.33
teslaa2-1.16.32.160      32,768   16    32768    160       1    $0.38
rtx2080ti-1.16.32.160    32,768   16    32768    160       1    $0.41
teslaa10-1.16.32.160     32,768   16    32768    160       1    $0.53
rtx3080-1.16.32.160      32,768   16    32768    160       1    $0.57
rtx3090-1.16.24.160      32,768   16    24576    160       1    $0.88
rtx4090-1.16.32.160      32,768   16    32768    160       1    $1.15
teslav100-1.12.64.160    32,768   12    65536    160       1    $1.20
rtx5090-1.16.64.160      32,768   16    65536    160       1    $1.59
teslaa100-1.16.64.160    32,768   16    65536    160       1    $2.58
teslah100-1.16.64.160    32,768   16    65536    160       1    $5.11
Prices:

Name                     Context  vCPU  RAM, MB  Disk, GB  GPU  Price, hour
teslat4-1.16.16.160      32,768   16    16384    160       1    $0.33
teslaa2-1.16.32.160      32,768   16    32768    160       1    $0.38
rtx2080ti-1.16.32.160    32,768   16    32768    160       1    $0.41
teslaa10-1.16.32.160     32,768   16    32768    160       1    $0.53
rtx3090-1.16.24.160      32,768   16    24576    160       1    $0.88
rtx3080-2.16.32.160      32,768   16    32762    160       2    $0.97
rtx4090-1.16.32.160      32,768   16    32768    160       1    $1.15
teslav100-1.12.64.160    32,768   12    65536    160       1    $1.20
rtx5090-1.16.64.160      32,768   16    65536    160       1    $1.59
teslaa100-1.16.64.160    32,768   16    65536    160       1    $2.58
teslah100-1.16.64.160    32,768   16    65536    160       1    $5.11
Prices:

Name                     Context  vCPU  RAM, MB  Disk, GB  GPU  Price, hour
teslaa10-1.16.32.160     32,768   16    32768    160       1    $0.53
teslat4-2.16.32.160      32,768   16    32768    160       2    $0.54
teslaa2-2.16.32.160      32,768   16    32768    160       2    $0.57
rtx2080ti-2.12.64.160    32,768   12    65536    160       2    $0.69
rtx3090-1.16.24.160      32,768   16    24576    160       1    $0.88
rtx3080-2.16.32.160      32,768   16    32762    160       2    $0.97
rtx4090-1.16.32.160      32,768   16    32768    160       1    $1.15
teslav100-1.12.64.160    32,768   12    65536    160       1    $1.20
rtx5090-1.16.64.160      32,768   16    65536    160       1    $1.59
teslaa100-1.16.64.160    32,768   16    65536    160       1    $2.58
teslah100-1.16.64.160    32,768   16    65536    160       1    $5.11

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.