Models

  • Our catalog features the most popular open-source AI models from developers worldwide, including large language models (LLMs), multimodal models, and diffusion models. Try any model in one place; we’ve made it easy for you.
  • To explore and test a model, you can query it through our public endpoint, as shown in the example below. For production use, fine-tuning, or custom weights, we recommend renting a virtual or dedicated GPU server.
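
A minimal sketch of querying a model through the public endpoint, assuming an OpenAI-compatible chat-completions API; the URL, model identifier, and API key below are placeholders, so check the endpoint documentation for the exact values:

  # Minimal sketch: send one chat request to a hosted model.
  # The base URL, model name, and API key are placeholder assumptions,
  # not the provider's actual values.
  import requests

  resp = requests.post(
      "https://api.example.com/v1/chat/completions",  # placeholder endpoint
      headers={"Authorization": "Bearer YOUR_API_KEY"},
      json={
          "model": "GLM-4.5",  # any model from the catalog
          "messages": [{"role": "user", "content": "Summarize MoE models in two sentences."}],
          "max_tokens": 256,
      },
      timeout=60,
  )
  resp.raise_for_status()
  print(resp.json()["choices"][0]["message"]["content"])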

GLM-4.5

A hybrid model with 355B parameters that combines advanced reasoning, programming with artifact generation, and agent capabilities in a unified MoE architecture with an increased number of hidden layers. At launch, the model ranked 3rd globally in average score across 12 key benchmarks. It is particularly strong at generating complete web applications, interactive presentations, and complex code: users only need to describe how the program should work and what outcome they expect.

reasoning
28.07.2025

Qwen3-235B-A22B-Thinking-2507

The new flagship MoE model of the Qwen3 series, based on Qwen3-235B-A22B, features enhanced "thinking" capabilities and an extended context length of 262K tokens. Operating exclusively in thinking mode, it achieves state-of-the-art performance among leading open and proprietary reasoning models, surpassing many well-known competitors in mathematical computation, programming, and logical reasoning. An ideal choice for complex research tasks requiring advanced agent and analytical capabilities.

reasoning
25.07.2025

Qwen3-Coder-30B-A3B-Instruct

A compact MoE model with 30.5B total parameters, of which only 3.3B are activated per token, designed specifically to assist with writing software code. The model features agent-like capabilities, supports a context length of 262,144 tokens, and demonstrates excellent performance at relatively low computational cost. These qualities make it an ideal choice as a programming assistant, as a QA system within programming education platforms, and for integration into code-autocompletion tools.

22.07.2025

Qwen3-Coder-480B-A35B-Instruct

Alibaba's flagship agent-based programming model featuring a Mixture-of-Experts architecture (480 billion total parameters, 35 billion active parameters) with native support for a 256K-token context. Qwen3-Coder's application scenarios cover the entire spectrum of modern software development—from building interactive web applications to modernizing legacy systems—including autonomous feature development spanning backend APIs, frontend components, and databases.

22.07.2025

Qwen3-235B-A22B-Instruct-2507

The updated flagship Qwen3 MoE model, with 235B parameters (22B active), features a native context length of 256K tokens and supports 119 languages. The developers have dropped the hybrid mode in this release, so the model runs only in non-thinking mode; however, additional refinement lets it significantly outperform competitors, delivering exceptional results in mathematics, programming, and logical reasoning. Furthermore, the FP8 version allows industrial-scale deployment with roughly 50% memory savings compared to 16-bit weights.
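
A back-of-the-envelope sketch of where the 50% figure comes from (weights only; KV cache, activations, and runtime overhead are ignored):

  # Rough weight-memory estimate for a 235B-parameter model.
  total_params = 235e9
  gb_bf16 = total_params * 2 / 1e9  # 2 bytes per parameter in BF16/FP16
  gb_fp8 = total_params * 1 / 1e9   # 1 byte per parameter in FP8
  print(f"BF16 weights: ~{gb_bf16:.0f} GB")  # ~470 GB
  print(f"FP8 weights:  ~{gb_fp8:.0f} GB")   # ~235 GB, about half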

try online
21.07.2025

T-pro-2.0

The first Russian-language model with 32 billion parameters and a hybrid reasoning mode, combining high efficiency in processing Russian with deep analytical reasoning for tasks of any complexity. The model requires roughly half the computational resources of comparable foreign models while delivering superior performance, opening new possibilities for autonomous AI agents.

reasoning
18.07.2025

Kimi-K2

An enormous MoE model containing 1 trillion parameters. The model is specifically designed for autonomous execution of complex tasks, tool usage, and interaction with external systems. Kimi K2 doesn't simply answer questions—it takes action. It represents a new generation of AI assistants capable of independently planning, executing, and monitoring multi-step processes without constant human involvement. This is precisely why developers recommend using the model in agent-based systems.
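
A minimal sketch of letting the model call an external tool through an OpenAI-compatible chat API, the usual pattern for agent-style use; the base URL, API key, model identifier, and the get_weather tool are illustrative assumptions:

  from openai import OpenAI

  client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

  tools = [{
      "type": "function",
      "function": {
          "name": "get_weather",  # hypothetical tool, for illustration only
          "description": "Return the current weather for a city",
          "parameters": {
              "type": "object",
              "properties": {"city": {"type": "string"}},
              "required": ["city"],
          },
      },
  }]

  response = client.chat.completions.create(
      model="Kimi-K2",  # identifier may differ per provider
      messages=[{"role": "user", "content": "Do I need an umbrella in Berlin today?"}],
      tools=tools,
  )
  # When the model decides to act, it returns a structured tool call instead of
  # a plain-text answer; your code executes it and sends the result back.
  print(response.choices[0].message.tool_calls)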

11.07.2025

MiniMax-M1-80k

Powerful reasoning with maximum capability and minimal resource consumption: 456B parameters, a 1,000,000-token context window, Lightning Attention (a novel approach to the attention mechanism), and an increased reasoning budget of 80,000 tokens.
This is ultimate performance for tackling the most complex research and product challenges in mathematics, programming, bioinformatics, law, finance, and beyond.

reasoning
16.06.2025

MiniMax-M1-40k

A large MoE model with 456B parameters, a massive context window of 1,000,000 tokens, and a reasoning budget of 40,000 tokens. Thanks to architectural innovations, the model is more resource-efficient compared to models of similar size, making it highly effective for a wide range of intelligent analysis tasks and agent-based applications.

reasoning
16.06.2025

DeepSeek-R1-0528

DeepSeek-R1-0528 is the first major update to the popular DeepSeek R1 series, released on May 28, 2025. The developers reworked their approach to depth of reasoning and increased the parameter count to 685 billion, yielding an improvement of more than 10 percentage points on nearly all significant benchmarks compared to the original R1 released in January 2025.

reasoning
28.05.2025

DeepSeek-R1-0528-Qwen3-8B

DeepSeek-R1-0528-Qwen3-8B is a compact model based on Qwen3 with 8 billion parameters, distilled from the flagship version DeepSeek-R1-0528. It achieves state-of-the-art (SOTA) results among open-source models in its category. The model is ideally suited for deployment in resource-constrained environments while retaining advanced mathematical and logical reasoning capabilities from the teacher model.

28.05.2025

VisualClozePipeline-384

VisualClozePipeline-384 is an image generation model that works from visual context: it infers the desired task from in-context image examples and applies it to new inputs.

15.05.2025

Phi-4-reasoning

Phi-4-Reasoning is a compact 14-billion-parameter reasoning model that confidently competes with much larger models in mathematics, programming, and scientific tasks. The model is ideally suited for educational and research applications where high-quality logical reasoning is required while efficiently utilizing computational resources.

reasoning
30.04.2025

Qwen3-235B-A22B

Qwen3-235B-A22B is the flagship open-source MoE model with 235 billion total parameters (22 billion active) and a context length of 128K tokens, delivering quality on par with the best proprietary models. The model is designed for mission-critical government systems, fundamental research, and flagship products where the highest level of modern AI quality is required.

reasoning
29.04.2025

Qwen3-0.6B

Qwen3-0.6B is an ultra-compact language model with 600 million parameters and a 32K token context window, optimized for mobile devices and edge computing. The model delivers fast inference with minimal resource consumption and is ideal for IoT applications.

reasoning
29.04.2025

Qwen3-1.7B

Qwen3-1.7B is a balanced model with 1.7 billion parameters and a 32K token context window, optimized for basic enterprise applications. It delivers high-quality dialogue and document analysis with moderate resource requirements, making it ideal for business chatbots and customer service automation systems.

reasoning
29.04.2025

Qwen3-4B

Qwen3-4B is a compact 4-billion-parameter model featuring an extended 32K token context window. Remarkably, developers claim its performance rivals that of the much larger Qwen2.5-72B-Instruct. This model is particularly well-suited for analytical tasks, technical documentation processing, and report generation.

reasoning
29.04.2025

Qwen3-8B

Qwen3-8B is the most frequently downloaded model in the Qwen3 series on Hugging Face. It supports switching between thinking modes and delivers the best performance at its scale, significantly surpassing Qwen2.5-7B in overall capabilities.

reasoning
29.04.2025

Qwen3-30B-A3B

Qwen3-30B-A3B is an advanced MoE (Mixture of Experts) model with a hybrid architecture that allows enabling or disabling reasoning mode as needed for flexible handling of tasks of varying complexity. With 30.5 billion parameters and dynamic activation of only 3.3 billion per token, along with support for context lengths of up to 128K, the model combines the quality of a large language model with the speed and efficiency of a smaller one.
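
A minimal sketch of toggling the reasoning mode with Hugging Face Transformers; the enable_thinking flag follows the Qwen3 model card, but verify it against the version you deploy:

  from transformers import AutoModelForCausalLM, AutoTokenizer

  model_id = "Qwen/Qwen3-30B-A3B"
  tokenizer = AutoTokenizer.from_pretrained(model_id)
  model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

  messages = [{"role": "user", "content": "Is 9973 prime? Explain briefly."}]
  prompt = tokenizer.apply_chat_template(
      messages,
      tokenize=False,
      add_generation_prompt=True,
      enable_thinking=True,  # set to False to answer without a reasoning trace
  )
  inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
  output = model.generate(**inputs, max_new_tokens=512)
  print(tokenizer.decode(output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))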

reasoning
29.04.2025

Qwen3-14B

Qwen3-14B is a model with 14 billion parameters and a context window of 128K tokens, delivering performance comparable to flagship solutions. It is ideally suited for tasks requiring expert-level analysis and content generation with heightened attention to detail.

reasoning
29.04.2025