T-pro-it-2.0

reasoning

T-Pro 2.0 is a new Russian large language model featuring a hybrid reasoning mode. It is built on the Qwen3-32B architecture but incorporates a completely redesigned tokenization system and fine-tuning methodology. The model contains 32 billion parameters and supports a context length of up to 40K tokens, extendable to 128K with RoPE scaling. The hybrid reasoning mode lets the model switch dynamically between fast responses for simple queries and deep, multi-step analysis for complex tasks.

A key technical innovation is the new approach to Russian-language tokenization: the model uses 30% fewer tokens to convey the same meaning as the original Qwen3. This was achieved by expanding the Cyrillic portion of the vocabulary more than fivefold while keeping the overall tokenizer size unchanged. The model also supports speculative decoding via the EAGLE architecture, which predicts several tokens at once and nearly doubles generation speed.

Training proceeded in three stages: pre-training on 40 billion instructional tokens (one-third of them reasoning-focused), supervised fine-tuning on roughly 500,000 high-quality instructions, and preference tuning on about 100,000 carefully selected examples.
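Since T-Pro 2.0 is Qwen3-based, the hybrid reasoning mode can plausibly be toggled per request the way Qwen3 does it, via an `enable_thinking` chat-template flag. A minimal sketch of building such a request payload for an OpenAI-compatible inference server; the flag name and `chat_template_kwargs` field follow the Qwen3/vLLM convention and should be verified against the model card:

```python
# Sketch: toggling T-Pro 2.0's hybrid reasoning mode per request.
# Assumes the Qwen3 convention of an `enable_thinking` chat-template
# flag (T-Pro 2.0 is Qwen3-32B-based) -- verify before relying on it.

def build_chat_payload(prompt: str, deep_reasoning: bool) -> dict:
    """Build an OpenAI-style chat payload for a server hosting T-pro-it-2.0."""
    return {
        "model": "t-tech/T-pro-it-2.0",
        "messages": [{"role": "user", "content": prompt}],
        # Qwen3-style switch: True -> multi-step reasoning with a thinking
        # trace, False -> fast direct answer.
        "chat_template_kwargs": {"enable_thinking": deep_reasoning},
        # Reasoning traces are long; leave room within the 40K context.
        "max_tokens": 8192 if deep_reasoning else 1024,
    }

# Simple query: the fast mode is enough.
fast = build_chat_payload("Столица Франции?", deep_reasoning=False)
# Olympiad-style problem: enable deep reasoning.
deep = build_chat_payload("Докажите неравенство Коши для n = 3.", deep_reasoning=True)
```

Switching the flag per request is what makes the mode "hybrid": the same deployment serves both quick lookups and long derivations without reloading the model.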

Benchmark results show that T-Pro 2.0 leads among open models of roughly 32 billion parameters, achieving top scores on MERA (0.660), ruMMLU (0.790), and Ru Arena Hard (0.876), significantly outperforming Qwen3-32B and other competing models. On reasoning benchmarks it performs exceptionally well on the Russian version of AIME (0.646) and on its own T-Math benchmark (0.799), which evaluates mathematical olympiad problems. The dialogue-based results are particularly strong: the 87.6% score on Ru Arena Hard was achieved with the reasoning mode enabled.

The model is also computationally efficient, requiring roughly half the resources of Chinese models such as Qwen3 and DeepSeek-R1-Distill on Russian-language tasks. T-Pro 2.0 opens up broad opportunities for automation and the creation of next-generation intelligent agents. In enterprise environments it excels at handling complex customer requests, automating routine office workflows, generating and analyzing documents, and writing technical code. The model is released under the Apache 2.0 license, allowing full freedom to use, modify, and further train it on proprietary data. It is compatible with popular frameworks such as SGLang, Hugging Face Transformers, and vLLM, ensuring smooth integration into existing infrastructure. T-Pro 2.0 combines top-tier performance with practical deployability, offering a Russian-developed alternative to international models with complete control over the underlying technology.


Announce Date: 18.07.2025
Parameters: 33B
Context: 41K
Layers: 64
Attention Type: Full or Sliding Window Attention
Developer: T-Bank
Transformers Version: 4.51.3
License: Apache 2.0
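The description above mentions extending the native 40K context to 128K via RoPE scaling. A minimal sketch of the corresponding config override, following the YaRN-style `rope_scaling` convention used by Qwen3-family models; the exact field names and whether T-Pro 2.0 ships this override by default should be checked against the model's config.json:

```python
# Sketch: YaRN-style RoPE scaling override to stretch T-pro-it-2.0's
# context from the native 40K (40,960) tokens to 128K (131,072).
# Field names follow the Qwen3/Transformers convention (assumption).
NATIVE_CONTEXT = 40_960
TARGET_CONTEXT = 131_072

rope_scaling = {
    "rope_type": "yarn",
    # Scaling factor = target / native = 131072 / 40960 = 3.2
    "factor": TARGET_CONTEXT / NATIVE_CONTEXT,
    "original_max_position_embeddings": NATIVE_CONTEXT,
}
```

In practice this dictionary would be passed as a config override when loading the model (for example via the `rope_scaling` entry in config.json), trading some short-context quality for the longer window.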

Public endpoint

Use our pre-built public endpoints for free to test inference and explore T-pro-it-2.0 capabilities. You can obtain an API access token on the token management page after registration and verification.
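Once an endpoint and token are available, requests follow the usual OpenAI-compatible chat shape. A standard-library sketch of such a call; the base URL and token below are placeholders, to be replaced with the endpoint link from the table and the token from the token management page:

```python
# Sketch: calling a hosted T-pro-it-2.0 endpoint over an OpenAI-compatible
# chat API using only the standard library. BASE_URL and API_TOKEN are
# placeholders -- no public endpoint exists for this model yet.
import json
import urllib.request

BASE_URL = "https://your-endpoint.example/v1"  # placeholder
API_TOKEN = "YOUR_API_TOKEN"                   # placeholder

body = json.dumps({
    "model": "t-tech/T-pro-it-2.0",
    "messages": [{"role": "user", "content": "Привет! Что ты умеешь?"}],
}).encode("utf-8")

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",
    },
)
# response = urllib.request.urlopen(req)  # uncomment against a live endpoint
```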
Model Name Context Type GPU Status Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
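The scenarios above map directly onto engine settings when self-hosting. A sketch of illustrative vLLM-style options for a 2-GPU private instance with full context and an optional LoRA adapter; the adapter name and path are hypothetical, and the values are assumptions rather than official defaults:

```python
# Sketch: illustrative engine settings for self-hosting T-pro-it-2.0
# (e.g. as vLLM flags: --tensor-parallel-size, --max-model-len,
# --enable-lora, --lora-modules). Paths and values are assumptions.
engine_args = {
    "model": "t-tech/T-pro-it-2.0",
    "tensor_parallel_size": 2,   # split the 32B weights across 2 GPUs
    "max_model_len": 40_960,     # enable the full native context
    "enable_lora": True,         # serve custom fine-tuned weights
    "lora_modules": {"my-adapter": "/weights/my-lora"},  # hypothetical path
}
```

The `tensor`/`pipeline` labels in the configurations below correspond to the parallelism strategy used to split the model across the listed GPU count.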

Recommended server configurations for hosting T-pro-it-2.0

Prices:
Name                          Context   Parallelism  GPUs  Price, hour  TPS
teslat4-3.32.64.160           40,960.0  pipeline     3     $0.88        1.638
teslaa10-2.16.64.160          40,960.0  tensor       2     $0.93        1.888
teslat4-4.16.64.160           40,960.0  tensor       4     $0.96        2.828
teslaa2-3.32.128.160          40,960.0  pipeline     3     $1.06        1.638
rtx2080ti-4.16.32.160         40,960.0  tensor       4     $1.12        1.028
rtxa5000-2.16.64.160.nvlink   40,960.0  tensor       2     $1.23        1.888
teslaa2-4.32.128.160          40,960.0  tensor       4     $1.26        2.828
rtx3090-2.16.64.160           40,960.0  tensor       2     $1.56        1.888
rtx4090-2.16.64.160           40,960.0  tensor       2     $1.92        1.888
teslav100-2.16.64.240         40,960.0  tensor       2     $2.22        3.328
teslaa100-1.16.64.160         40,960.0  -            1     $2.37        5.018
rtx5090-2.16.64.160           40,960.0  tensor       2     $2.93        3.328
h100-1.16.64.160              40,960.0  -            1     $3.83        5.018
h100nvl-1.16.96.160           40,960.0  -            1     $4.11        6.278
h200-1.16.128.160             40,960.0  -            1     $4.74        10.508
Prices:
Name                          Context   Parallelism  GPUs  Price, hour  TPS
teslat4-4.16.64.160           40,960.0  tensor       4     $0.96        1.327
teslaa2-4.32.128.160          40,960.0  tensor       4     $1.26        1.327
teslaa10-3.16.96.160          40,960.0  pipeline     3     $1.34        2.297
teslaa10-4.12.48.160          40,960.0  tensor       4     $1.57        4.207
teslav100-2.16.64.240         40,960.0  tensor       2     $2.22        1.827
rtx3090-3.16.96.160           40,960.0  pipeline     3     $2.29        2.297
rtxa5000-4.16.128.160.nvlink  40,960.0  tensor       4     $2.34        4.207
teslaa100-1.16.64.160         40,960.0  -            1     $2.37        3.517
rtx4090-3.16.96.160           40,960.0  pipeline     3     $2.83        2.297
rtx3090-4.16.64.160           40,960.0  tensor       4     $2.89        4.207
rtx5090-2.16.64.160           40,960.0  tensor       2     $2.93        1.827
rtx4090-4.16.64.160           40,960.0  tensor       4     $3.60        4.207
h100-1.16.64.160              40,960.0  -            1     $3.83        3.517
h100nvl-1.16.96.160           40,960.0  -            1     $4.11        4.777
h200-1.16.128.160             40,960.0  -            1     $4.74        9.007
Prices:
Name                          Context   Parallelism  GPUs  Price, hour  TPS
teslaa10-4.16.128.240         40,960.0  tensor       4     $1.76        1.083
rtx3090-4.16.96.320           40,960.0  tensor       4     $2.97        1.083
rtx4090-4.16.96.320           40,960.0  tensor       4     $3.68        1.083
teslav100-3.64.256.320        40,960.0  pipeline     3     $3.89        1.333
h100nvl-1.16.96.240           40,960.0  -            1     $4.12        1.653
rtx5090-3.16.96.240           40,960.0  pipeline     3     $4.35        1.333
teslav100-4.32.256.320        40,960.0  tensor       4     $4.68        3.963
h200-1.16.128.240             40,960.0  -            1     $4.74        5.883
teslaa100-2.24.256.240        40,960.0  tensor       2     $4.93        7.343
rtx5090-4.16.128.320          40,960.0  tensor       4     $5.76        3.963
h100-2.24.256.240             40,960.0  tensor       2     $7.85        7.343

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.