FLUX.1 Depth [dev] is a 12 billion parameter rectified flow transformer capable of generating an image based on a text description while following the structure of a given input image.
FLUX.1 Canny [dev] is a 12 billion parameter rectified flow transformer capable of generating an image based on a text description while following the structure of a given input image.
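Both control variants load through the same diffusers interface. A minimal sketch, assuming a recent diffusers release that ships FluxControlPipeline and a depth (or Canny edge) map prepared ahead of time; the file path and prompt are placeholders:

```python
import torch
from diffusers import FluxControlPipeline
from diffusers.utils import load_image

pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev",  # swap in FLUX.1-Canny-dev for edge guidance
    torch_dtype=torch.bfloat16,
).to("cuda")

# The control image is a precomputed depth map (or Canny edge map);
# "depth_map.png" is a placeholder path, not a file shipped with the model.
control_image = load_image("depth_map.png")

image = pipe(
    prompt="a futuristic city street at dusk, cinematic lighting",
    control_image=control_image,
    num_inference_steps=30,
    guidance_scale=10.0,
).images[0]
image.save("flux_depth.png")
```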
Shuttle 3 Diffusion is a text-to-image generation model designed to create detailed and diverse images in just four steps. It offers enhanced image quality, understanding of complex prompts, efficient resource usage, and increased detail.
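A minimal sketch, assuming the checkpoint is published as a diffusers-compatible repo under shuttleai/shuttle-3-diffusion; the repo ID and settings here are assumptions based on the description above, not an official recipe:

```python
import torch
from diffusers import DiffusionPipeline

# Assumed repo ID; DiffusionPipeline resolves the concrete pipeline class
# from the repository's model_index.json.
pipe = DiffusionPipeline.from_pretrained(
    "shuttleai/shuttle-3-diffusion", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="a watercolor fox in a snowy forest",
    num_inference_steps=4,  # the model is tuned for four-step generation
).images[0]
image.save("shuttle3.png")
```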
CogVideoX1.5-5B is an open-source text-to-video generation model analogous to the commercial model QingYing. It generates videos from English-language text prompts, and an image-to-video variant (CogVideoX1.5-5B-I2V) is also available. The model is distributed on platforms such as Hugging Face, ModelScope, and WiseModel.
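A minimal text-to-video sketch with diffusers' CogVideoXPipeline; the frame count and step settings are illustrative defaults rather than the model's official recipe:

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX1.5-5B", torch_dtype=torch.bfloat16
).to("cuda")

video = pipe(
    prompt="a panda playing guitar by a quiet mountain lake",
    num_inference_steps=50,
    num_frames=81,        # illustrative clip length
    guidance_scale=6.0,
).frames[0]
export_to_video(video, "cogvideox.mp4", fps=16)
```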
Stable Diffusion 3.5 Medium is a Multimodal Diffusion Transformer with improvements (MMDiT-X) text-to-image model that features improved performance in image quality, typography, complex prompt understanding, and resource-efficiency.
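A minimal sketch using diffusers' StableDiffusion3Pipeline. The Large and Large Turbo variants described below load the same way by swapping the repo ID; Turbo is typically run with about four steps and guidance disabled, which is a common setting rather than something stated in this catalog:

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="a lighthouse on a rocky cliff, oil painting",
    num_inference_steps=28,  # illustrative; Turbo variants use far fewer steps
    guidance_scale=4.5,
).images[0]
image.save("sd35_medium.png")
```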
Mochi-1 is a state-of-the-art open-source text-to-video generation model developed by Genmo. It achieves high-fidelity motion and strong prompt adherence in preliminary evaluations, significantly narrowing the gap between closed and open video generation systems.
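A minimal sketch following diffusers' MochiPipeline usage; the bf16 variant flag and the offloading/tiling calls are memory-saving options rather than requirements:

```python
import torch
from diffusers import MochiPipeline
from diffusers.utils import export_to_video

pipe = MochiPipeline.from_pretrained(
    "genmo/mochi-1-preview", variant="bf16", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # reduces peak VRAM by swapping modules to CPU
pipe.enable_vae_tiling()         # decodes the video in tiles to save memory

frames = pipe(
    prompt="a close-up of ocean waves at sunset",
    num_frames=85,  # illustrative clip length
).frames[0]
export_to_video(frames, "mochi.mp4", fps=30)
```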
Stable Diffusion 3.5 Large is a Multimodal Diffusion Transformer (MMDiT) text-to-image model that features improved performance in image quality, typography, complex prompt understanding, and resource-efficiency.
Stable Diffusion 3.5 Large Turbo is a distilled Multimodal Diffusion Transformer (MMDiT) text-to-image model that generates in very few sampling steps while retaining improved performance in image quality, typography, complex prompt understanding, and resource-efficiency.
Qwen2.5-72B is the flagship open-weight model featuring 72 billion parameters and 128K context length, delivering state-of-the-art performance competitive with models five times its size. Designed for projects demanding the highest quality in AI solutions.
Qwen2.5-32B is a 32B parameter model with a 128K context window, offering top-tier performance for complex enterprise and research tasks. It is ideal for legal, scientific, and large-scale content analysis.
Qwen2.5-14B utilizes 14 billion parameters and processes a 128K-token context window, maintaining the speed of lightweight models while adding the high performance and accuracy of mid-sized ones. It is ideally suited for knowledge management systems and comprehensive industry-specific AI solutions.
Qwen2.5-0.5B is an ultra-compact model with 500 million parameters, optimized for rapid deployment and handling basic tasks on devices with minimal resources. It is ideal for chatbots, mobile applications, and embedded systems where low power consumption and high processing speed are critical.
Qwen2.5-7B is a versatile 7-billion-parameter model with a 128K token context window. It excels at complex tasks including structured data processing, delivers high accuracy, and is particularly well-suited for business assistants and automated reporting systems.
Qwen2.5-3B, with 3 billion parameters and 32K context support, fills an important niche between small and medium-sized models in the series. It is ideally suited for research projects, prototyping, and developing specialized solutions with an optimal balance of performance and resource efficiency.
Qwen2.5-1.5B is a lightweight 1.5 billion parameter model with strong language capabilities and an optimal size/performance balance. It is optimized for basic document processing, summarization, and deployment in mobile devices or embedded AI applications.
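All of the Qwen2.5 sizes above expose the same transformers interface. A minimal sketch with the 7B Instruct checkpoint; any other size can be substituted via the model ID, assuming an Instruct variant is published for it:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize rectified flow in two sentences."},
]
# Build the chat-formatted prompt and generate a reply.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```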
FLUX.1 [schnell] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. It is timestep-distilled for fast generation in as few as one to four sampling steps and is released under the Apache 2.0 license.
FLUX.1 [dev] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. It is guidance-distilled, trading some generation speed for higher output quality, and is released under a non-commercial license.
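Both FLUX.1 [schnell] and [dev] load through diffusers' FluxPipeline. A minimal sketch for schnell; for dev, swap the repo ID and raise the step count and guidance (roughly 50 steps and a guidance_scale around 3.5 are common settings, assumed here rather than taken from this catalog):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="an astronaut riding a horse on Mars, photorealistic",
    num_inference_steps=4,    # schnell is distilled for few-step sampling
    guidance_scale=0.0,       # schnell does not use classifier-free guidance
    max_sequence_length=256,
).images[0]
image.save("flux_schnell.png")
```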
Qwen2-57B-A14B is a multilingual Mixture-of-Experts (MoE) model with 57 billion total parameters, of which about 14 billion are activated per token. It is optimized for complex text generation tasks in question-answering systems, analytics, and programming, combining high output quality with computational efficiency.
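A minimal serving sketch with vLLM, assuming a build with Qwen2-MoE support, sufficient GPU memory across four devices, and that the instruction-tuned checkpoint is published under Qwen/Qwen2-57B-A14B-Instruct (the repo ID and parallelism degree are assumptions):

```python
from vllm import LLM, SamplingParams

# Shard the 57B checkpoint across four GPUs; adjust to your hardware.
llm = LLM(model="Qwen/Qwen2-57B-A14B-Instruct", tensor_parallel_size=4)
params = SamplingParams(temperature=0.7, max_tokens=256)

# Completion-style call for brevity; production chat use would apply
# the model's chat template to the prompt first.
outputs = llm.generate(
    ["Explain mixture-of-experts routing in one paragraph."], params
)
print(outputs[0].outputs[0].text)
```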