Models

  • Our catalog features the most popular open-source AI models from developers worldwide, including large language models (LLMs), multimodal models, and diffusion models. Try any model in one place; we’ve made it easy for you.
  • To explore and test a model, run it on our public endpoint; a minimal sketch follows below. For production use, fine-tuning, or custom weights, we recommend either a private endpoint or a dedicated cloud server.
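
For example, here is a minimal sketch of querying a hosted model over an OpenAI-compatible API. The base URL, API key, and model ID are placeholders for illustration, not confirmed details of this platform; substitute the values shown on the model’s page.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # hypothetical public endpoint URL
    api_key="YOUR_API_KEY",                 # key issued in your account settings
)

# Send a single chat request to any conversational model in the catalog.
response = client.chat.completions.create(
    model="Qwen2.5-7B",  # placeholder model ID
    messages=[{"role": "user", "content": "Explain what a context window is."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```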

Stable Diffusion 3.5 Medium

Stable Diffusion 3.5 Medium is a text-to-image model built on an improved Multimodal Diffusion Transformer (MMDiT-X) architecture, offering better image quality, typography, complex-prompt understanding, and resource efficiency than earlier releases. A local-inference sketch follows below.

29.10.2024
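
If you prefer to experiment locally rather than on the endpoint, the sketch below uses the Hugging Face diffusers library, assuming you have accepted the Stability AI license for the gated weights and have a CUDA GPU. Step count and guidance scale follow the model card’s example values; tune them for your use case.

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Load the MMDiT-X text-to-image pipeline in bfloat16 to save memory.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium",
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    prompt="a watercolor lighthouse at dawn, soft mist",
    num_inference_steps=40,  # model-card example value
    guidance_scale=4.5,
).images[0]
image.save("lighthouse.png")
```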

Stable Diffusion 3.5 Large

Stable Diffusion 3.5 Large is a text-to-image model built on the Multimodal Diffusion Transformer (MMDiT) architecture. As the largest model in the 3.5 family, it delivers the series’ strongest image quality, typography, and complex-prompt understanding.

22.10.2024

Stable Diffusion 3.5 Large Turbo

Stable Diffusion 3.5 Large Turbo is a distilled version of Stable Diffusion 3.5 Large that generates high-quality images in as few as 4 inference steps, trading a small amount of fidelity for much faster generation.

22.10.2024

Qwen2.5-7B

Qwen2.5-7B is a versatile 7-billion-parameter model with a 128K-token context window. It handles complex tasks, including structured data processing, with high accuracy, and is particularly well suited to business assistants and automated reporting systems.

19.09.2024

Qwen2.5-0.5B

Qwen2.5-0.5B is an ultra-compact model with 500 million parameters, optimized for rapid deployment and handling basic tasks on devices with minimal resources. It is ideal for chatbots, mobile applications, and embedded systems where low power consumption and high processing speed are critical.

19.09.2024

Qwen2.5-1.5B

Qwen2.5-1.5B is a lightweight 1.5-billion-parameter model with strong language capabilities and an optimal size-to-performance balance. It is optimized for basic document processing and summarization, and for deployment on mobile devices and in embedded AI applications.

19.09.2024

Qwen2.5-72B

Qwen2.5-72B is the flagship open-weight model featuring 72 billion parameters and 128K context length, delivering state-of-the-art performance competitive with models five times its size. Designed for projects demanding the highest quality in AI solutions.

19.09.2024

Qwen2.5-32B

Qwen2.5-32B is a 32-billion-parameter model with a 128K context window, offering top-tier performance for complex enterprise and research tasks. It is ideal for legal, scientific, and large-scale content analysis.

19.09.2024

Qwen2.5-14B

Qwen2.5-14B utilizes 14 billion parameters and processes a 128K-token context window, maintaining the speed of lightweight models while adding the high performance and accuracy of mid-sized ones. It is ideally suited for knowledge management systems and comprehensive industry-specific AI solutions.

19.09.2024

Qwen2.5-3B

Qwen2.5-3B, with 3 billion parameters and 32K context support, fills an important niche between small and medium-sized models in the series. It is ideally suited for research projects, prototyping, and developing specialized solutions with an optimal balance of performance and resource efficiency.

19.09.2024

FLUX.1-schnell

FLUX.1 [schnell] is a 12-billion-parameter rectified flow transformer that generates images from text descriptions. Distilled for speed, it produces images in as few as 1-4 inference steps; a local-inference sketch follows below.

01.08.2024
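
As a local alternative to the endpoint, here is a minimal diffusers sketch for FLUX.1 [schnell]. Because the model is distilled for speed, it runs with very few steps and no classifier-free guidance; parameter values follow the model card’s example.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # helpful if the 12B model exceeds GPU memory

image = pipe(
    prompt="an isometric illustration of a tiny robot workshop",
    num_inference_steps=4,  # schnell is designed for 1-4 steps
    guidance_scale=0.0,     # the distilled model does not use guidance
).images[0]
image.save("workshop.png")
```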

FLUX.1-dev

FLUX.1 [dev] is a 12-billion-parameter rectified flow transformer that generates images from text descriptions. It is a guidance-distilled variant that targets higher image quality and prompt adherence than [schnell], at the cost of more inference steps.

01.08.2024

Qwen2-57B-A14B

Qwen2-57B-A14B is a multilingual Mixture-of-Experts (MoE) model with 57 billion total parameters, of which about 14 billion are activated per token (a routing sketch follows below). It is optimized for complex text-generation tasks in question-answering systems, analytics, and programming, and offers high resource and computational efficiency.

27.07.2024
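
For intuition about why a 57B-parameter MoE model activates only about 14B parameters per token, here is a toy sketch of top-k expert routing. The shapes, expert count, and k are illustrative, not Qwen2’s actual configuration.

```python
import torch

tokens, d_model, n_experts, k = 4, 8, 8, 2
x = torch.randn(tokens, d_model)
router = torch.nn.Linear(d_model, n_experts)
experts = torch.nn.ModuleList(
    torch.nn.Linear(d_model, d_model) for _ in range(n_experts)
)

# Each token picks its k highest-scoring experts; the rest stay idle.
weights, chosen = router(x).softmax(dim=-1).topk(k, dim=-1)
weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over top-k

out = torch.zeros_like(x)
for t in range(tokens):
    for w, e in zip(weights[t], chosen[t]):
        out[t] += w * experts[int(e)](x[t])  # only k of n_experts run per token
```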

Qwen2-72B

Qwen2-72B is the flagship model of the second series, featuring 72 billion parameters and a context window of 128K tokens, delivering performance on par with leading proprietary models. The model is suitable for the most demanding and accuracy-critical applications.

24.07.2024

Qwen2-7B

Qwen2-7B is a 7-billion-parameter model that delivers high performance and accuracy. It efficiently runs on mid-range GPUs and serves as a foundation for developing specialized solutions across various domains.

24.07.2024

Qwen2-1.5B

Qwen2-1.5B is a lightweight, balanced model with 1.5 billion parameters, designed for basic tasks on local machines and small servers. It delivers solid performance in text generation, summarization, and translation while maintaining moderate resource requirements.

24.07.2024

Qwen2-0.5B

Qwen2-0.5B is an ultra-compact model with 0.5 billion parameters and a 32K context window, optimized for deployment on mobile devices and IoT systems. It is suitable for building simple applications and text autocompletion systems.

24.07.2024

Llama-3.1-8B

Llama-3.1-8B is a hugely popular multilingual community model, trained on 15 trillion tokens, with 8 billion parameters and a 128K-token context window. It is adapted to a wide range of tasks, supports function calling (a sketch follows below), and is ideally suited to building intelligent dialogue systems, software assistants, and agent applications.

23.07.2024
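
Since function calling is one of this model’s headline features, here is a hedged sketch of a tool-call request over an OpenAI-compatible endpoint. The base URL, model ID, and the get_weather tool are placeholders for illustration.

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.example.com/v1", api_key="YOUR_API_KEY")

# Describe a hypothetical tool the model may choose to call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="Llama-3.1-8B",  # placeholder model ID
    messages=[{"role": "user", "content": "What's the weather in Oslo?"}],
    tools=tools,
)
# If the model opts to call the tool, the arguments arrive as structured JSON.
print(response.choices[0].message.tool_calls)
```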

Phi-3.5-mini

Phi-3.5-mini is a compact and highly efficient language model capable of running on mobile and edge devices, delivering generation quality comparable to that of larger models. Thanks to optimized training on high-quality data and multilingual support, it is ideal for chatbots, educational applications, and tasks with limited computational resources.

23.04.2024