T-pro-2.0

reasoning

T-Pro 2.0 is a new Russian large language model featuring a hybrid reasoning mode. It is built upon the Qwen3-32B architecture but incorporates a completely redesigned tokenization system and fine-tuning methodology. The model contains 32 billion parameters and supports a context length of up to 32,768 tokens, extendable to 128,000 with RoPE scaling. The hybrid reasoning mode lets the model switch dynamically between fast responses for simple queries and deep, multi-step analysis for complex tasks.

One of the key technical innovations is a new approach to Russian-language tokenization: the model uses 30% fewer tokens to convey the same meaning as the original Qwen3. This was achieved by expanding the Cyrillic portion of the vocabulary more than fivefold while keeping the overall tokenizer size unchanged. The model also incorporates speculative decoding based on the EAGLE architecture, which predicts several tokens at once and nearly doubles generation speed.

Training proceeded in three stages: pre-training on 40 billion instructional tokens, one-third of them reasoning-focused; supervised fine-tuning on roughly 500,000 high-quality instructions; and preference tuning on around 100,000 carefully selected examples.
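Since the model is built on Qwen3, the hybrid mode should be controllable per request through the chat template. Below is a minimal sketch using HuggingFace Transformers, assuming the checkpoint is published as t-tech/T-pro-it-2.0 and inherits Qwen3's enable_thinking switch (both are assumptions here, not vendor-confirmed details):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "t-tech/T-pro-it-2.0"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Prove that the sum of two even numbers is even."}]

# enable_thinking=True -> deep multi-step reasoning; False -> fast direct answer.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```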

Benchmark results show that T-Pro 2.0 leads among open models with roughly 32 billion parameters, achieving top scores on MERA (0.660), ruMMLU (0.790), and Ru Arena Hard (0.876, measured with the reasoning mode enabled), significantly outperforming Qwen3-32B and other competing models. On reasoning benchmarks it performs exceptionally well on the Russian version of AIME (0.646) and on the developer's own T-Math benchmark (0.799), which evaluates performance on mathematical olympiad problems.

The model's computational efficiency is also notable: on Russian-language tasks it consumes roughly half the resources of comparable models such as Qwen3 and DeepSeek-R1-Distill. T-Pro 2.0 opens up broad opportunities for automation and for building next-generation intelligent agents. In enterprise environments it excels at handling complex customer requests, automating routine office workflows, generating and analyzing documents, and writing technical code. The model is released under the Apache 2.0 license, allowing unrestricted use, modification, and further training on proprietary data. It is compatible with popular frameworks such as SGLang, HuggingFace Transformers, and vLLM, ensuring smooth integration into existing infrastructure. T-Pro 2.0 combines top-tier performance with practical deployability, offering a Russian-developed alternative to international models with full control over the underlying technology.
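As an illustration of deployment, here is a hedged vLLM sketch for offline batch inference; the repo id, context limit, and tensor-parallel degree are assumptions to adapt to your hardware:

```python
from vllm import LLM, SamplingParams

# Two-GPU tensor parallelism and the full 32K context are illustrative choices.
llm = LLM(
    model="t-tech/T-pro-it-2.0",  # assumed repo id
    max_model_len=32768,
    tensor_parallel_size=2,
)

params = SamplingParams(temperature=0.7, top_p=0.8, max_tokens=1024)
outputs = llm.generate(["Summarize the Apache 2.0 license in two sentences."], params)
for out in outputs:
    print(out.outputs[0].text)
```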


Announce Date: 18.07.2025
Parameters: 32.8B
Context: 32K
Attention Type: Full or Sliding Window Attention
VRAM requirements: 23.3 GB with 4-bit quantization (see the loading sketch below)
Developer: T-Bank
Transformers Version: 4.51.3
License: Apache 2.0
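
The 23.3 GB figure is consistent with simple arithmetic: 32.8B parameters at 4 bits is roughly 16.4 GB of weights, with the remainder taken by the KV cache and runtime overhead. Below is a minimal loading sketch using bitsandbytes through HuggingFace Transformers; the NF4 settings and the t-tech/T-pro-it-2.0 repo id are illustrative assumptions, not the vendor's published recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit (NF4) config; the exact quantization behind the
# 23.3 GB estimate is an assumption.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "t-tech/T-pro-it-2.0",  # assumed repo id
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("t-tech/T-pro-it-2.0")
```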

Public endpoint

Use our pre-built public endpoints to test inference and explore T-pro-2.0 capabilities.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the sketch after this list).
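
For the last scenario, here is a minimal vLLM sketch of serving a fine-tuned LoRA adapter on top of the base weights; the adapter name and path are hypothetical.

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# enable_lora lets the engine attach adapters at request time.
llm = LLM(model="t-tech/T-pro-it-2.0", enable_lora=True, max_model_len=32768)

out = llm.generate(
    ["Classify this support ticket: the payment page times out."],
    SamplingParams(max_tokens=256),
    # LoRARequest(name, unique int id, path to the adapter weights)
    lora_request=LoRARequest("support-lora", 1, "/adapters/t-pro-2.0-support-lora"),
)
print(out[0].outputs[0].text)
```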

Recommended configurations for hosting T-pro-2.0

Prices:
Name                    vCPU   RAM, MB   Disk, GB   GPUs   Price, hour
teslaa2-2.16.32.160     16     32768     160        2      $0.57
teslat4-2.16.32.160     16     32768     160        2      $0.80
teslaa10-2.16.64.160    16     65536     160        2      $0.93
rtx2080ti-3.16.64.160   16     65536     160        3      $0.95
teslav100-1.12.64.160   12     65536     160        1      $1.20
rtx3080-3.16.64.160     16     65536     160        3      $1.43
rtx5090-1.16.64.160     16     65536     160        1      $1.59
rtx3090-2.16.64.160     16     65536     160        2      $1.67
rtx4090-2.16.64.160     16     65536     160        2      $2.19
teslaa100-1.16.64.160   16     65536     160        1      $2.58
teslah100-1.16.64.160   16     65536     160        1      $5.11
Prices:
Name                    vCPU   RAM, MB   Disk, GB   GPUs   Price, hour
teslaa10-2.16.64.160    16     65536     160        2      $0.93
rtx2080ti-4.16.64.160   16     65536     160        4      $1.18
teslat4-4.16.64.160     16     65536     160        4      $1.48
rtx3090-2.16.64.160     16     65536     160        2      $1.67
rtx4090-2.16.64.160     16     65536     160        2      $2.19
teslav100-2.16.64.240   16     65536     240        2      $2.22
teslaa100-1.16.64.160   16     65536     160        1      $2.58
rtx5090-2.16.64.160     16     65536     160        2      $2.93
teslah100-1.16.64.160   16     65536     160        1      $5.11
Prices:
Name                    vCPU   RAM, MB   Disk, GB   GPUs   Price, hour
teslaa10-4.16.128.160   16     131072    160        4      $1.75
teslaa100-1.16.128.160  16     131072    160        1      $2.71
rtx3090-4.16.128.160    16     131072    160        4      $3.23
rtx4090-4.16.128.160    16     131072    160        4      $4.26
rtx5090-3.16.96.160     16     98304     160        3      $4.34
teslah100-1.16.128.160  16     131072    160        1      $5.23

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.