Models

  • Our catalog features the most popular open-source AI models from developers worldwide, including large language models (LLMs), multimodal models, and diffusion models. Try any model in one place: we’ve made it easy for you.
  • To explore and test a model, you can query it through our public endpoint (see the example below). For production use, fine-tuning, or custom weights, we recommend renting a virtual or dedicated GPU server.
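
For example, assuming the public endpoint exposes an OpenAI-compatible chat completions API (the URL, route, model ID, and authentication scheme below are illustrative placeholders rather than the actual values; check the endpoint documentation in your account), a quick test from Python might look like this:

```python
import os

import requests

# Placeholder endpoint and key: substitute the real public endpoint URL and
# the API key issued for your account.
API_URL = "https://api.example.com/v1/chat/completions"
API_KEY = os.environ["API_KEY"]

payload = {
    "model": "Qwen3-32B",  # any chat model ID from the catalog
    "messages": [
        {
            "role": "user",
            "content": "Explain the difference between dense and MoE models in two sentences.",
        }
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The same request shape works for any of the text models listed below; only the "model" field changes.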

Qwen3-32B

Qwen3-32B is the flagship dense model with 32 billion parameters and a context window of 128K tokens, designed for mission-critical AI systems. It delivers state-of-the-art quality on the most complex tasks and is ideal for building advanced AI products.

reasoning
29.04.2025

GLM-Z1-32B-0414

GLM-Z1-32B-0414 is a specialized reasoning model with 32B parameters and a 32K context length, trained through extended reinforcement learning (RL) to solve complex mathematical and logical problems. It is ideally suited for educational platforms, scientific research, and the development of systems requiring step-by-step analysis and solution justification.

reasoning
14.04.2025

GLM-Z1-9B-0414

GLM-Z1-9B-0414 is a compact reasoning model with 9.4 billion parameters. Despite its relatively small size, it demonstrates impressive step-by-step reasoning on general tasks. Thanks to an excellent balance between efficiency and performance, it is ideally suited for deployment in resource-constrained environments.

reasoning
14.04.2025

GLM-Z1-Rumination-32B-0414

GLM-Z1-Rumination-32B-0414 is a reasoning-capable model with 32 billion parameters, specifically trained to solve complex research and analytical tasks, with the ability to use external search. It excels at prolonged deliberation, which allows it to handle multi-step assignments effectively.

reasoning
14.04.2025

GLM-4-32B-0414

GLM-4-32B-0414 is a powerful model with 32 billion parameters, pre-trained on 15T tokens of high-quality data. In terms of performance, it is comparable to leading models such as GPT-4o and DeepSeek-V3-0324, particularly in programming tasks, while remaining lightweight enough for easy local deployment.

14.04.2025

Llama 4 Scout

Llama 4 Scout is a natively multimodal model with a context window of up to 10 million tokens that can run on a single GPU. It is ideal for analyzing large volumes of text and quickly extracting information from images.

multimodal
05.04.2025

Llama 4 Maverick

Llama 4 Maverick supports a context window of up to 1 million tokens and native multimodality, and delivers high speed and efficiency thanks to an architecture combining 128 experts and 400 billion parameters. The model is well suited for programming and technical documentation tasks.

multimodal
05.04.2025

YandexGPT-5-Lite-8B

A specialized Russian-language model with 8B parameters and a 32K-token context, trained entirely from scratch on Russian and English data. Thanks to optimized tokenization and innovative training techniques, the model outperforms similar-sized solutions like Llama and Qwen, especially in tasks related to Russian culture and language.

31.03.2025

DeepSeek-V3-0324

DeepSeek-V3-0324 is an enhanced version of DeepSeek's powerful and popular MoE model with 685 billion parameters. It demonstrates exceptional quality, thoroughly elaborated answers, and outstanding erudition in tasks ranging from analyzing complex legal documents to generating executable program code from scratch.

24.03.2025

Gemma-3-27B

Gemma 3 27B is the flagship multimodal model from Google DeepMind, with 27 billion parameters and the highest performance in the Gemma 3 family. It is easy to fine-tune and ideal for a wide range of complex research tasks and high-end enterprise solutions.

multimodal
try online
12.03.2025

Gemma-3-1B

Gemma 3 1B is an ultra-compact model with just 1 billion parameters that nevertheless retains impressive capabilities. It supports a context window of 32K tokens and is ideal for resource-constrained devices and tasks where response speed is critical.

12.03.2025

Gemma-3-12B

Gemma 3 12B is a high-performance multimodal model with 12 billion parameters, a context window of 128K tokens, and multilingual understanding, designed for a wide range of tasks. It excels at processing long documents, images, and technical content.

multimodal
12.03.2025

Gemma-3-4B

Gemma 3 4B is a compact multimodal model featuring a context window of 128K tokens and built-in support for more than 35 languages, including Russian. It's an excellent solution for embedded systems and applications that process text and images with limited computational resources.

multimodal
12.03.2025

QwQ

QwQ is a model with 32.5 billion parameters and a context length of 131K tokens, specifically designed for deep reasoning and logical analysis. Its unique ability to perform transparent and structured thinking sets it apart from competitors, delivering high-quality and well-thought-out responses.

reasoning
try online
06.03.2025

Wan2.1-T2V-1.3B-Diffusers

Wan2.1-T2V-1.3B-Diffusers is a text-to-video model with 1.3 billion parameters, developed for generating video from text prompts. The model is optimized for consumer-grade GPUs: it requires 8.19 GB of VRAM, and generating a 5-second video at 480p resolution takes about 4 minutes on an RTX 4090 without optimization (see the usage sketch below).

01.03.2025
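
Because the checkpoint is published in Diffusers format, a minimal text-to-video sketch with the Hugging Face diffusers library might look as follows. This assumes a recent diffusers release with Wan 2.1 support (WanPipeline and AutoencoderKLWan); the prompt, resolution, and frame count are illustrative:

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"

# Load the VAE in float32 for numerical stability; run the rest of the
# pipeline in bfloat16 to fit on a consumer GPU.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# 81 frames exported at 15 fps gives roughly a 5-second 480p clip.
frames = pipe(
    prompt="A red fox running through a snowy forest, cinematic lighting",
    negative_prompt="blurry, low quality, distorted",
    height=480,
    width=832,
    num_frames=81,
    guidance_scale=5.0,
).frames[0]

export_to_video(frames, "fox.mp4", fps=15)
```

If VRAM is tight, replacing pipe.to("cuda") with pipe.enable_model_cpu_offload() trades generation speed for a smaller memory footprint.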

Phi-4-multimodal

Phi-4-multimodal is an efficient solution for multimodal tasks with edge deployment support, combining a compact size (5.6B parameters) with the capabilities of large language models. The model is ideal for developing applications that simultaneously process speech, images, and text on resource-constrained devices.

multimodal
27.02.2025

Qwen2.5-VL-3B

Qwen2.5-VL-3B is a compact, 3-billion-parameter multimodal model designed for edge deployment, yet it delivers outstanding capabilities in image and video comprehension and agent-based task execution.

multimodal
19.02.2025

Qwen2.5-VL-7B

Qwen2.5-VL-7B is a powerful multimodal model with 7 billion parameters, delivering an optimal balance between high performance and efficiency. It is designed for complex document analysis, video stream processing, and agent-based interaction tasks (see the image-input sketch below).

multimodal
19.02.2025
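
For multimodal models such as this one, an image can be passed alongside the text prompt. The sketch below reuses the placeholder endpoint from the introduction and assumes it accepts OpenAI-style mixed text/image message content; the model ID, image URL, and schema support are assumptions to verify against the endpoint documentation:

```python
import os

import requests

# Placeholder endpoint and key, as in the text-only example in the introduction.
API_URL = "https://api.example.com/v1/chat/completions"
API_KEY = os.environ["API_KEY"]

payload = {
    "model": "Qwen2.5-VL-7B",  # illustrative model ID
    "messages": [
        {
            "role": "user",
            # The content list mixes a text part and an image part referenced by URL.
            "content": [
                {"type": "text", "text": "List the line-item totals shown in this invoice."},
                {"type": "image_url", "image_url": {"url": "https://example.com/invoice.png"}},
            ],
        }
    ],
    "max_tokens": 512,
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```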

Chroma1-HD

Chroma1-HD is an 8.9-billion-parameter text-to-image model based on the FLUX.1-schnell architecture.

27.01.2025

Qwen2.5-7B-1M

Qwen2.5-7B-1M is a compact yet powerful model with 7.6 billion parameters. Thanks to sparse attention technologies, it can process up to one million context tokens at excellent speeds. The model is an ideal solution for organizations requiring high-performance analysis of long documents while optimizing resource usage.

26.01.2025