A model for image editing tasks that ensures high accuracy, quality, and consistency across various scenarios.
An open-source foundation model designed for solving complex tasks and long-running agent scenarios. With an MoE architecture of 754B parameters (40B active), sparse attention (DSA), the slime RL infrastructure, and a focus on practical utility, GLM-5 pushes AI interaction far beyond simple chat, transforming it into a full-fledged executive assistant.
An efficient MoE model with 80B parameters (3B active), specifically designed for programming-oriented agents. The model features highly efficient inference, an extended context length (262K tokens), and best-in-class handling of various tool call formats, making it a highly suitable choice for deploying intelligent developer assistants.
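To make "handling of various tool call formats" concrete, here is a minimal sketch of the widely used OpenAI-style function-calling format that such an agent model would typically consume and emit; the tool name, its parameters, and the arguments are hypothetical and not taken from the model card.

```python
# Hypothetical illustration of a common tool-call exchange for a coding agent.
# The tool ("run_tests") and its arguments are invented for this sketch.
import json

# Tool schema advertised to the model (OpenAI-compatible "function" style).
tool_schema = {
    "type": "function",
    "function": {
        "name": "run_tests",
        "description": "Run the project's test suite and return the results.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Test file or directory."},
                "verbose": {"type": "boolean"},
            },
            "required": ["path"],
        },
    },
}

# A tool call as the model would emit it inside an assistant message.
assistant_tool_call = {
    "id": "call_0",
    "type": "function",
    "function": {"name": "run_tests", "arguments": json.dumps({"path": "tests/", "verbose": True})},
}

print(json.dumps(assistant_tool_call, indent=2))
```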
It is a native multimodal autoregressive model designed for image generation, supporting both text-to-image and image-to-image (TI2I) tasks. It features a unified architecture for multimodal understanding and generation, achieving performance comparable to leading closed-source models. The model includes two main variants: HunyuanImage-3.0 (text-to-image) and HunyuanImage-3.0-Instruct (enhanced with reasoning capabilities for intelligent prompt improvement and creative editing).
This model is designed for image-to-video generation and falls under the "World Model" category. The project is licensed under Apache-2.0, ensuring open access to the code and models.
This is the base model of the Z-Image family, designed for high-quality image generation, broad style coverage, and precise alignment with text prompts. It is intended for professional use, creative tasks, and research, in contrast to the accelerated version Z-Image-Turbo.
A 30-billion parameter MoE model with only ~3.6B parameters activated per token, delivering record-breaking performance in its class with minimal resource requirements (~24 GB VRAM). The model leads in agent-based tasks and programming, supports a 200K context, and is optimized for easy local deployment.
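A quick back-of-the-envelope check of the ~24 GB figure, assuming (this is an assumption, the description does not state the precision) weights quantized to roughly 4 bits per parameter:

```python
# Rough VRAM estimate for a ~30B-parameter MoE model on a 24 GB card.
# Assumption: ~4-bit weight quantization; all experts must be resident in
# memory even though only ~3.6B parameters are active per token.
total_params = 30e9
bytes_per_param = 0.5            # 4 bits = 0.5 bytes

weights_gb = total_params * bytes_per_param / 1e9
print(f"weights: ~{weights_gb:.0f} GB")                  # ~15 GB
print(f"headroom on 24 GB: ~{24 - weights_gb:.0f} GB")   # KV cache, activations, buffers
```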
It is a 4 billion parameter rectified flow transformer model designed for fast image generation and editing. It unifies text-to-image generation and multi-reference image editing into a single compact architecture, enabling end-to-end inference in under a second. Optimized for real-time applications without compromising quality, it runs on consumer-grade GPUs such as NVIDIA RTX 3090/4070 with approximately 13GB VRAM.
It is a 9 billion parameter rectified flow transformer model designed for high-speed image generation and editing. It unifies text-to-image generation and multi-reference image editing into a single compact architecture, achieving state-of-the-art quality with end-to-end inference in under half a second. The model leverages an 8 billion parameter Qwen3 text embedder and is step-distilled to 4 inference steps, enabling real-time performance while matching or exceeding the quality of models five times its size.
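To make the "step-distilled to 4 inference steps" claim concrete, below is a minimal sketch of few-step sampling along a rectified-flow (straight-line) path using Euler integration; the toy velocity network and tensor shapes are placeholders, not the model's actual architecture.

```python
# Minimal sketch of few-step rectified-flow sampling: Euler integration of a
# learned velocity field from noise (t=1) to data (t=0). The toy network
# stands in for the real transformer; only the loop illustrates the idea.
import torch

class ToyVelocityNet(torch.nn.Module):
    """Placeholder for the velocity predictor v_theta(x_t, t)."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.net = torch.nn.Linear(dim + 1, dim)

    def forward(self, x, t):
        t_feat = t.expand(x.shape[0], 1)              # broadcast timestep to the batch
        return self.net(torch.cat([x, t_feat], dim=-1))

@torch.no_grad()
def sample(model, shape, num_steps: int = 4):
    x = torch.randn(shape)                            # start from pure noise at t=1
    ts = torch.linspace(1.0, 0.0, num_steps + 1)
    for i in range(num_steps):
        t, t_next = ts[i], ts[i + 1]
        v = model(x, t.view(1, 1))
        x = x + (t_next - t) * v                      # one Euler step on the straight path
    return x

model = ToyVelocityNet()
latents = sample(model, shape=(1, 16), num_steps=4)   # 4 steps, as in the distilled model
print(latents.shape)
```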
It is a text-to-image and image-to-image generation model employing a hybrid architecture combining an autoregressive generator and a diffusion decoder. It excels in generating high-fidelity images with precise text rendering and semantic understanding, particularly in complex, information-dense scenarios.
An open-source model built on a Mixture-of-Experts architecture with 1 trillion parameters, of which 32 billion are activated per token. The developers have implemented a "visual agentic intelligence" paradigm that combines visual perception, reasoning, and autonomous agents. The model is multimodal, ships in native INT4 quantization, and includes a distinctive Agent Swarm mechanism that orchestrates up to 100 sub-agents running in parallel, improving quality and reducing execution time on complex tasks by an average factor of 4.5.
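The Agent Swarm mechanism is described only at a high level; the sketch below shows the generic fan-out/fan-in pattern such an orchestrator implies, with the sub-agent call and the task split entirely hypothetical.

```python
# Hypothetical fan-out/fan-in orchestrator in the spirit of the described
# "Agent Swarm": split a task, run sub-agents concurrently, merge results.
# The sub-agent here is a stub; in practice it would be a model/API call.
import asyncio

async def run_subagent(subtask: str) -> str:
    await asyncio.sleep(0.1)                 # stand-in for real sub-agent work
    return f"result for: {subtask}"

async def agent_swarm(task: str, num_agents: int = 8) -> list[str]:
    subtasks = [f"{task} [part {i}]" for i in range(num_agents)]
    # Fan out: sub-agents run in parallel (the description allows up to 100).
    results = await asyncio.gather(*(run_subagent(s) for s in subtasks))
    # Fan in: a real orchestrator would now synthesize these into one answer.
    return list(results)

print(asyncio.run(agent_swarm("audit the checkout flow screenshots", num_agents=4)))
```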
It is the December 2025 update to Qwen-Image, a text-to-image foundation model. It is designed to generate high-quality images from textual prompts, with enhanced realism, detail rendering, and text integration.
An advanced MoE model with agentic capabilities, created as an intelligent programming partner. Its distinguishing feature is a multi-level "thinking" system that delivers unprecedented stability and control on complex tasks. The ideal choice for development, automation, and programmatic visual content creation.
It is an enhanced image-to-image generation model, succeeding Qwen-Image-Edit-2509.
A model from NVIDIA with 31.6B parameters (3.6B active), optimized specifically for high-performance agentic systems. It uses a hybrid Mamba-Transformer MoE architecture that delivers memory efficiency, high throughput, and accurate reasoning on contexts of up to 1M tokens.
A multimodal model with 106B parameters, using a Mixture-of-Experts (MoE) architecture and a 128K token context. Its key feature is native tool-calling support, enabling it to directly work with images as both input and output, making it an ideal platform for building complex AI agents for document analysis, visual search, and front-end development automation.
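As an illustration of native tool calling with an image as input, the request below uses the widely adopted OpenAI-compatible chat format; the model id, image URL, and tool are placeholders, not part of the description above (and image output is omitted for brevity).

```python
# Hypothetical request: an image passed as input plus a tool the model may call.
# The model id, image URL, and tool definition are placeholders.
import json

request = {
    "model": "vision-agent-106b",                      # placeholder model id
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Extract the invoice total and file it."},
            {"type": "image_url", "image_url": {"url": "https://example.com/invoice.png"}},
        ],
    }],
    "tools": [{
        "type": "function",
        "function": {
            "name": "file_invoice",                    # hypothetical tool
            "parameters": {
                "type": "object",
                "properties": {"total": {"type": "number"}, "currency": {"type": "string"}},
                "required": ["total"],
            },
        },
    }],
}
print(json.dumps(request, indent=2))
```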
A compact 9-billion-parameter multimodal model with a 128K-token context length and native support for visual Function Calling. It achieves state-of-the-art results among models of comparable size on the MMBench, MathVista, and OCRBench benchmarks and is optimized for local deployment and agent-based scenarios.
It is the image editing variant of LongCat-Image, supporting bilingual (Chinese-English) editing tasks with state-of-the-art performance among open-source models. It excels in instruction-following capabilities and visual consistency while maintaining high image quality.
It is an open-source, bilingual (Chinese-English) foundation model designed for text-to-image generation. It addresses key challenges in multilingual text rendering, photorealism, deployment efficiency, and developer accessibility. With only 6 billion parameters, it outperforms larger open-source models across benchmarks, showcasing efficient architecture design.
A DeepSeek-AI model with advanced reasoning capabilities and agent functions, combining high computational efficiency with GPT-5-level performance. Thanks to its sparse attention architecture (DSA) and unique "in-call tool reasoning" mechanism, the model is ideally suited for building autonomous agents, balancing speed, resource cost, and the complexity of the tasks it solves.