Models

  • Our catalog features the most popular open-source AI models from developers worldwide, including large language models (LLMs), multimodal models, and diffusion models. Try any model in one place; we’ve made it easy for you.
  • To explore and test a model, you can query it through our public endpoint (see the sketch below). For production use, fine-tuning, or custom weights, we recommend renting a virtual or a dedicated GPU server.
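As a quick illustration, here is a minimal sketch of querying a hosted model, assuming the public endpoint speaks the common OpenAI-compatible chat-completions protocol; the URL, API key, and model name below are placeholders, not actual account values.

```python
# Minimal sketch: querying a hosted model through an OpenAI-compatible
# chat-completions endpoint. URL, key, and model name are placeholders;
# substitute the values from your own account.
import requests

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                  # placeholder key

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "Qwen2.5-7B",  # any model name from the catalog below
        "messages": [
            {"role": "user", "content": "Summarize what an MoE model is in one sentence."}
        ],
        "max_tokens": 256,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```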

DeepSeek-R1-Distill-Qwen-32B

DeepSeek-R1-Distill-Qwen-32B is a dense model distilled from the large MoE reasoning model DeepSeek-R1, setting new records among open-source dense models. It is suitable for scientific, corporate, and educational platforms with high demands on logic and analysis.

20.01.2025

DeepSeek-R1

DeepSeek-R1 is a reasoning model with 671 billion parameters, trained with reinforcement learning (RL). It supports long chains of thought (CoT) and specializes in multi-step reasoning and logical analysis, making it indispensable for tasks that require well-founded conclusions and a transparent reasoning process.

20.01.2025
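Because R1 exposes its chain of thought, client code usually separates the reasoning from the final answer. A minimal sketch, assuming the raw completion wraps the reasoning in <think>...</think> tags (as the open-weight R1 models emit) and the same placeholder OpenAI-compatible endpoint as above:

```python
# Minimal sketch: separating DeepSeek-R1's chain of thought from its final
# answer. Assumes the raw completion wraps the reasoning in <think>...</think>
# tags; the endpoint URL and API key are placeholders.
import re
import requests

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder
API_KEY = "YOUR_API_KEY"                                  # placeholder

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "DeepSeek-R1",
        "messages": [{"role": "user", "content": "Is 97 a prime number? Explain briefly."}],
        "max_tokens": 2048,  # reasoning models need room for long chains of thought
    },
    timeout=300,
)
resp.raise_for_status()
text = resp.json()["choices"][0]["message"]["content"]

# Split the internal reasoning from the user-facing answer.
match = re.search(r"<think>(.*?)</think>(.*)", text, flags=re.DOTALL)
reasoning, answer = (match.group(1), match.group(2)) if match else ("", text)
print("Final answer:", answer.strip())
```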

DeepSeek-R1-Distill-Qwen-1.5B

DeepSeek-R1-Distill-Qwen-1.5B is a compact model that, thanks to distillation, retains strong reasoning capabilities. It is ideal for fast text analysis in mobile and edge applications.

20.01.2025

DeepSeek-V3

DeepSeek-V3 is a powerful Mixture-of-Experts (MoE) model with 671 billion total parameters, of which about 37 billion are activated per token. It is one of the most popular open-source alternatives capable of competing with proprietary counterparts. With a 128K-token context window and high generation accuracy, it is ideal for professional tasks, from analyzing complex data to creating high-quality creative content.

26.12.2024

Phi-4

Phi-4 is Microsoft's flagship compact model with 14 billion parameters, designed for efficiency within a limited 16K-token context window. It is optimized for tasks where fast responses and accuracy in short interactions are critical.

12.12.2024

Llama-3.3-70B

Llama-3.3-70B is a language model supporting 8 languages, featuring a large context window (128K tokens) and high accuracy, making it ideal for assistant and dialogue systems. According to the developers, its performance is on par with the 405-billion-parameter Llama 3.1.

06.12.2024

FLUX.1-Depth-dev

FLUX.1 Depth [dev] is a 12 billion parameter rectified flow transformer capable of generating an image based on a text description while following the structure of a given input image, preserved via a depth map extracted from it.

21.11.2024
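A minimal structure-guided generation sketch, assuming Hugging Face diffusers' FluxControlPipeline, a precomputed depth map ("depth.png" is a placeholder path), and a GPU with enough VRAM for the 12B weights:

```python
# Minimal sketch: structure-guided generation with FLUX.1 Depth [dev] via
# diffusers' FluxControlPipeline. "depth.png" is a placeholder for a depth
# map already extracted from the source image.
import torch
from diffusers import FluxControlPipeline
from diffusers.utils import load_image

pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="a cozy wooden cabin in a snowy forest at dusk",
    control_image=load_image("depth.png"),  # structure the output should follow
    num_inference_steps=30,
    guidance_scale=10.0,
).images[0]
image.save("cabin.png")
```

FLUX.1 Canny [dev] (next entry) is used the same way: swap in the black-forest-labs/FLUX.1-Canny-dev checkpoint and pass a Canny edge map as control_image.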

FLUX.1-Canny-dev

FLUX.1 Canny [dev] is a 12 billion parameter rectified flow transformer capable of generating an image based on a text description while following the structure of a given input image, preserved via Canny edges extracted from it.

21.11.2024

FLUX.1-Fill-dev

FLUX.1 Fill [dev] is a 12 billion parameter rectified flow transformer capable of filling masked areas in existing images based on a text description (inpainting and outpainting).

21.11.2024
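A minimal inpainting sketch, assuming diffusers' FluxFillPipeline; "photo.png" and "mask.png" are placeholder paths, where white pixels in the mask mark the region to repaint:

```python
# Minimal sketch: inpainting with FLUX.1 Fill [dev] via diffusers'
# FluxFillPipeline. Placeholder paths: "photo.png" is the source image,
# "mask.png" a same-sized mask whose white pixels mark the area to fill.
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="a vase of fresh sunflowers",
    image=load_image("photo.png"),
    mask_image=load_image("mask.png"),
    num_inference_steps=50,
    guidance_scale=30.0,
).images[0]
image.save("filled.png")
```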

FLUX.1-Kontext-dev

FLUX.1 Kontext [dev] is a 12 billion parameter rectified flow transformer capable of editing images based on text instructions. 

21.11.2024
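A minimal instruction-based editing sketch, assuming a recent diffusers release that includes FluxKontextPipeline; "portrait.png" is a placeholder input path:

```python
# Minimal sketch: text-guided image editing with FLUX.1 Kontext [dev] via
# diffusers' FluxKontextPipeline. "portrait.png" is a placeholder input path.
import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    image=load_image("portrait.png"),
    prompt="make the jacket bright red; keep everything else unchanged",
    guidance_scale=2.5,
).images[0]
image.save("edited.png")
```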

Shuttle 3 Diffusion

Shuttle 3 Diffusion is a text-to-image generation model designed to create detailed and diverse images in just four steps. It offers enhanced image quality, understanding of complex prompts, efficient resource usage, and increased detail.

12.11.2024

Stable Diffusion 3.5 Medium

Stable Diffusion 3.5 Medium is a text-to-image model built on the improved Multimodal Diffusion Transformer (MMDiT-X) architecture, delivering better image quality, typography, complex-prompt understanding, and resource efficiency.

29.10.2024

Stable Diffusion 3.5 Large Turbo

Stable Diffusion 3.5 Large Turbo is a distilled version of Stable Diffusion 3.5 Large, built on the Multimodal Diffusion Transformer (MMDiT) architecture, that generates high-quality images with strong prompt adherence in just four steps.

22.10.2024

Stable Diffusion 3.5 Large

Stable Diffusion 3.5 Large is a text-to-image model built on the Multimodal Diffusion Transformer (MMDiT) architecture, delivering improved image quality, typography, complex-prompt understanding, and resource efficiency.

22.10.2024
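All three Stable Diffusion 3.5 variants load the same way through diffusers' StableDiffusion3Pipeline. A minimal sketch with the Large checkpoint; the repo id and sampling settings for the other variants are the documented defaults, not anything specific to our platform:

```python
# Minimal sketch: text-to-image with Stable Diffusion 3.5 via diffusers'
# StableDiffusion3Pipeline. Swap the repo id for the Medium or Large Turbo
# variants; Turbo is distilled to run in ~4 steps with guidance_scale=0.0.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="a capybara reading a newspaper, watercolor style",
    num_inference_steps=28,
    guidance_scale=4.5,
).images[0]
image.save("capybara.png")
```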

Qwen2.5-72B

Qwen2.5-72B is the flagship open-weight model of the Qwen2.5 series, featuring 72 billion parameters and a 128K-token context length, delivering state-of-the-art performance competitive with models five times its size. It is designed for projects demanding the highest quality in AI solutions.

19.09.2024
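For local inference, the whole Qwen2.5 family (this entry and the smaller variants below) shares one Hugging Face transformers workflow. A minimal sketch with the instruct-tuned 72B checkpoint; note that 72B needs multi-GPU-scale memory, while the smaller repo ids drop in directly:

```python
# Minimal sketch: local inference for the Qwen2.5 family with Hugging Face
# transformers. The 72B checkpoint needs multi-GPU-scale memory; swap in a
# smaller repo id (e.g. "Qwen/Qwen2.5-7B-Instruct") to run on one GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-72B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain a 128K context window in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```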

Qwen2.5-0.5B

Qwen2.5-0.5B is an ultra-compact model with 500 million parameters, optimized for rapid deployment and handling basic tasks on devices with minimal resources. It is ideal for chatbots, mobile applications, and embedded systems where low power consumption and high processing speed are critical.

19.09.2024

Qwen2.5-1.5B

Qwen2.5-1.5B is a lightweight 1.5-billion-parameter model with strong language capabilities and an optimal size-to-performance balance. It is optimized for basic document processing, summarization, and deployment on mobile devices or in embedded AI applications.

19.09.2024

Qwen2.5-3B

Qwen2.5-3B, with 3 billion parameters and 32K context support, fills an important niche between small and medium-sized models in the series. It is ideally suited for research projects, prototyping, and developing specialized solutions with an optimal balance of performance and resource efficiency.

19.09.2024

Qwen2.5-7B

Qwen2.5-7B is a versatile 7-billion-parameter model with a 128K-token context window. It excels at complex tasks, including structured data processing, delivers high accuracy, and is particularly well suited for business assistants and automated reporting systems.

19.09.2024

Qwen2.5-14B

Qwen2.5-14B utilizes 14 billion parameters and processes a 128K-token context window, maintaining the speed of lightweight models while adding the high performance and accuracy of mid-sized ones. It is ideally suited for knowledge management systems and comprehensive industry-specific AI solutions.

19.09.2024