Models

  • Our catalog features the most popular open-source AI models from developers worldwide, including large language models (LLMs), multimodal models, and diffusion models. Try any model in one place; we’ve made it easy for you.
  • To explore and test a model, you can query it through our public endpoint, as sketched below. For production use, fine-tuning, or custom weights, we recommend renting a virtual or dedicated GPU server.
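
A minimal sketch of such a query, assuming an OpenAI-compatible chat-completions endpoint; the URL, API key, and model name below are placeholders rather than the actual values of our service:

import requests

# Placeholder values: substitute the real endpoint URL, your API key,
# and the name of the catalog model you want to query.
API_URL = "https://api.example.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "Qwen3-32B",
        "messages": [
            {"role": "user", "content": "Summarize the Qwen3 model family in two sentences."}
        ],
        "max_tokens": 200,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])

The same request pattern applies to any model in the catalog; only the "model" field changes.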

DeepSeek-R1-0528-Qwen3-8B

DeepSeek-R1-0528-Qwen3-8B is a compact model based on Qwen3 with 8 billion parameters, distilled from the flagship version DeepSeek-R1-0528. It achieves state-of-the-art (SOTA) results among open-source models in its category. The model is ideally suited for deployment in resource-constrained environments while retaining advanced mathematical and logical reasoning capabilities from the teacher model.

28.05.2025

VisualClozePipeline-384

VisualClozePipeline-384 is a model for image generation conditioned on visual context.

15.05.2025

Phi-4-reasoning

Phi-4-reasoning is a compact 14-billion-parameter reasoning model that confidently competes with much larger models in mathematics, programming, and scientific tasks. The model is ideally suited for educational and research applications where high-quality logical reasoning is required while efficiently utilizing computational resources.

reasoning
30.04.2025

Qwen3-235B-A22B

Qwen3-235B-A22B is the flagship open-source MoE model with 235 billion total parameters (22 billion active) and a context length of 128K tokens, delivering quality on par with the best proprietary models. The model is designed for mission-critical government systems, fundamental research, and flagship products where the highest level of modern AI quality is required.

reasoning
29.04.2025

Qwen3-0.6B

Qwen3-0.6B is an ultra-compact language model with 600 million parameters and a 32K token context window, optimized for mobile devices and edge computing. The model delivers fast inference with minimal resource consumption and is ideal for IoT applications.

reasoning
29.04.2025

Qwen3-1.7B

Qwen3-1.7B is a balanced model with 1.7 billion parameters and a 32K token context window, optimized for basic enterprise applications. It delivers high-quality dialogue and document analysis with moderate resource requirements, making it ideal for business chatbots and customer service automation systems.

reasoning
29.04.2025

Qwen3-4B

Qwen3-4B is a compact 4-billion-parameter model featuring an extended 32K token context window. Remarkably, developers claim its performance rivals that of the much larger Qwen2.5-72B-Instruct. This model is particularly well-suited for analytical tasks, technical documentation processing, and report generation.

reasoning
29.04.2025

Qwen3-8B

Qwen3-8B is the most frequently downloaded model in the Qwen3 series on Hugging Face. It supports switching between thinking modes (see the sketch after this entry) and delivers the best performance at its scale, significantly surpassing Qwen2.5-7B in overall capabilities.

reasoning
29.04.2025
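
As a rough illustration of the thinking-mode switch mentioned above, the sketch below follows the usage pattern from the Qwen3 model card when running the model locally with Hugging Face transformers; the prompt and generation settings are illustrative.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is 17 * 23? Answer briefly."}]

# enable_thinking=True lets the model produce a reasoning block before the
# final answer; set it to False for direct, lower-latency responses.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True, enable_thinking=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))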

Qwen3-30B-A3B

Qwen3-30B-A3B is an advanced MoE (Mixture of Experts) model with a hybrid architecture that allows enabling or disabling reasoning mode as needed for flexible handling of tasks of varying complexity. With 30.5 billion parameters and dynamic activation of only 3.3 billion per token, along with support for context lengths of up to 128K, the model combines the quality of a large language model with the speed and efficiency of a smaller one.

reasoning
29.04.2025

Qwen3-14B

Qwen3-14B is a model with 14 billion parameters and a context window of 128K tokens, delivering performance comparable to flagship solutions. It is ideally suited for tasks requiring expert-level analysis and content generation with heightened attention to detail.

reasoning
29.04.2025

Qwen3-32B

Qwen3-32B is the flagship dense model with 32 billion parameters and a context window of 128K tokens, designed for mission-critical AI systems. It delivers state-of-the-art quality on the most complex tasks and is ideal for building advanced AI products.

reasoning
try online
29.04.2025

GLM-Z1-32B-0414

GLM-Z1-32B-0414 is a specialized reasoning model with 32B parameters and a 32K context length, trained through extended reinforcement learning (RL) to solve complex mathematical and logical problems. It is ideally suited for educational platforms, scientific research, and the development of systems requiring step-by-step analysis and solution justification.

reasoning
14.04.2025

GLM-Z1-9B-0414

GLM-Z1-9B-0414 is a compact reasoning model with 9.4 billion parameters. Despite its relatively small size, it demonstrates impressive step-by-step reasoning on simple general-purpose tasks. Thanks to its excellent balance of efficiency and performance, it is ideally suited for deployment in resource-constrained environments.

reasoning
14.04.2025

GLM-Z1-Rumination-32B-0414

GLM-Z1-Rumination-32B-0414 is a reasoning-capable model with 32 billion parameters, specifically trained to solve complex research and analytical tasks and able to use external search. It excels at prolonged deliberation, which allows it to handle multi-step assignments effectively.

reasoning
14.04.2025

GLM-4-32B-0414

GLM-4-32B-0414 is a powerful model with 32 billion parameters, trained on 15 trillion tokens of high-quality data. In terms of performance, it is comparable to leading models such as GPT-4o and DeepSeek-V3-0324, particularly in programming tasks, while remaining lightweight enough for easy local deployment.

14.04.2025

Llama 4 Scout

Llama 4 Scout is a natively multimodal model with a context window of up to 10 million tokens that runs on a single GPU. It is well suited for analyzing large volumes of text and quickly extracting information from images.

multimodal
05.04.2025

Llama 4 Maverick

Llama 4 Maverick supports a context window of up to 1 million tokens and native multimodality, and delivers high speed and efficiency thanks to an architecture that combines 128 experts with 400 billion total parameters. The model is well suited for programming and technical documentation tasks.

multimodal
05.04.2025

YandexGPT-5-Lite-8B

A specialized Russian-language model with 8 billion parameters and a 32K-token context window, trained entirely from scratch on Russian and English data. Thanks to optimized tokenization and innovative training techniques, the model outperforms similar-sized solutions such as Llama and Qwen, especially in tasks related to Russian culture and language.

31.03.2025

DeepSeek-V3-0324

DeepSeek-V3-0324 is an enhanced version of DeepSeek's powerful and popular MoE model with 685 billion parameters. It demonstrates exceptional quality, deeply reasoned answers, and broad erudition in tasks ranging from analyzing complex legal documents to generating executable program code from scratch.

24.03.2025

Gemma-3-27B

Gemma 3 27B is the flagship multimodal model from Google DeepMind, with 27 billion parameters delivering the top performance in the Gemma 3 family. It is easy to fine-tune and ideal for a wide range of complex research tasks and high-end enterprise solutions.

multimodal
try online
12.03.2025