T-Pro 2.0 is a new Russian large language model featuring a hybrid reasoning mode. It is built upon the Qwen3-32B architecture but incorporates a completely redesigned tokenization system and fine-tuning methodology. The model contains 32 billion parameters and supports a context length of up to 32,768 tokens, which can be extended to 128,000 using RoPE scaling. The hybrid reasoning mode allows the model to dynamically switch between fast responses for simple queries and deep, multi-step analysis for complex tasks.

One of the key technical innovations is the new approach to Russian-language tokenization: the model uses 30% fewer tokens to convey the same meaning compared to the original Qwen3. This was achieved by expanding the Cyrillic portion of the vocabulary more than fivefold while keeping the overall size of the tokenizer unchanged. Additionally, the model incorporates speculative decoding using the EAGLE architecture, which enables the prediction of multiple tokens at once and nearly doubles the speed of text generation.

The training process included three stages: pre-training on 40 billion instructional tokens, of which one third were reasoning-focused; supervised fine-tuning on approximately 500,000 high-quality instructions; and preference tuning on around 100,000 carefully selected examples.
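As an illustration of how the hybrid reasoning mode is typically exposed in Qwen3-derived checkpoints, the sketch below loads the model with HuggingFace Transformers and toggles step-by-step reasoning through the chat template. The repository ID `t-tech/T-pro-it-2.0` and the `enable_thinking` flag are assumptions based on the Qwen3 lineage; check the official model card for the exact identifiers and recommended sampling settings.

```python
# Minimal sketch: loading T-Pro 2.0 with HuggingFace Transformers and
# switching the hybrid reasoning mode on or off via the chat template.
# NOTE: the repo ID and the `enable_thinking` flag are assumptions
# inherited from the Qwen3 family; consult the official model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "t-tech/T-pro-it-2.0"  # assumed HuggingFace repository ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many prime numbers are there below 100?"}]

# enable_thinking=True requests the slow, multi-step reasoning path;
# set it to False for fast answers to simple queries.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```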
Benchmark results show that T-Pro 2.0 leads among open models with approximately 32 billion parameters, achieving top scores on MERA (0.660), ruMMLU (0.790), and Ru Arena Hard (0.876), significantly outperforming Qwen3-32B and other competing models. In reasoning benchmarks, it performs exceptionally well on the Russian version of AIME (0.646) and on its own T-Math benchmark (0.799), which evaluates performance on mathematical olympiad problems. The results in dialogue-oriented evaluations are particularly strong: the 87.6% result on Ru Arena Hard was achieved with the reasoning mode enabled.
The model's computational efficiency is notable: for Russian-language tasks it requires roughly half the resources of Chinese models such as Qwen3 and DeepSeek R1-Distill. T-Pro 2.0 opens up broad opportunities for automation and for building next-generation intelligent agents. In enterprise environments, it excels at handling complex customer requests, automating routine office workflows, generating and analyzing documents, and writing technical code. The model is available under the Apache 2.0 license, allowing unrestricted use, modification, and further training on proprietary data. It is compatible with popular inference frameworks such as SGLang, HuggingFace Transformers, and vLLM, ensuring smooth integration into existing infrastructure. T-Pro 2.0 combines top-tier quality with practical deployability, offering a Russian-developed alternative to international models with full control over the underlying technology.
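Because the checkpoint works with standard inference servers, a typical deployment is to expose it through an OpenAI-compatible endpoint (vLLM or SGLang) and query it with an ordinary HTTP client. The sketch below assumes a server started with `vllm serve t-tech/T-pro-it-2.0` on the default port 8000; the repository ID and port are assumptions, not values confirmed by the source.

```python
# Minimal client sketch against a vLLM (or SGLang) OpenAI-compatible server.
# Assumed setup (not confirmed by the source):
#   vllm serve t-tech/T-pro-it-2.0 --port 8000
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="t-tech/T-pro-it-2.0",  # assumed repository ID
    messages=[
        {"role": "system", "content": "You are an assistant for analyzing corporate documents."},
        {"role": "user", "content": "Summarize the contract below in three bullet points."},
    ],
    temperature=0.3,
    max_tokens=512,
)
print(response.choices[0].message.content)
```

The same client code works unchanged against an SGLang server, since both frameworks implement the OpenAI chat-completions API.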
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following scenarios:
vCPU | RAM, MB | Disk, GB | GPUs | Price
---|---|---|---|---
16 | 32768 | 160 | 2 | $0.57
16 | 32768 | 160 | 2 | $0.80
16 | 65536 | 160 | 2 | $0.93
16 | 65536 | 160 | 3 | $0.95
12 | 65536 | 160 | 1 | $1.20
16 | 65536 | 160 | 3 | $1.43
16 | 65536 | 160 | 1 | $1.59
16 | 65536 | 160 | 2 | $1.67
16 | 65536 | 160 | 2 | $2.19
16 | 65536 | 160 | 1 | $2.58
16 | 65536 | 160 | 1 | $5.11
vCPU | RAM, MB | Disk, GB | GPUs | Price
---|---|---|---|---
16 | 65536 | 160 | 2 | $0.93
16 | 65536 | 160 | 4 | $1.18
16 | 65536 | 160 | 4 | $1.48
16 | 65536 | 160 | 2 | $1.67
16 | 65536 | 160 | 2 | $2.19
16 | 65535 | 240 | 2 | $2.22
16 | 65536 | 160 | 1 | $2.58
16 | 65536 | 160 | 2 | $2.93
16 | 65536 | 160 | 1 | $5.11
vCPU | RAM, MB | Disk, GB | GPUs | Price
---|---|---|---|---
16 | 131072 | 160 | 4 | $1.75
16 | 131072 | 160 | 1 | $2.71
16 | 131072 | 160 | 4 | $3.23
16 | 131072 | 160 | 4 | $4.26
16 | 98304 | 160 | 3 | $4.34
16 | 131072 | 160 | 1 | $5.23
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.