Kandinsky-5.0-T2I-Lite-sft-Diffusers is a text-to-image (T2I) model with 6 billion parameters, developed for generating images based on text prompts. The model belongs to the Kandinsky 5.0 family, which includes models for generating video and images.
Kandinsky-5.0-I2I-Lite-sft-Diffusers is an image-to-image (I2I) model with 6 billion parameters, developed for modifying images based on text prompts. The model belongs to the Kandinsky 5.0 family, which includes models for generating video and images.
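A minimal sketch of loading both Lite checkpoints through the generic Diffusers entry point; the repository IDs and generation arguments below are assumptions for illustration, and the concrete pipeline class is resolved automatically from each checkpoint's model_index.json.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image

# Text-to-image: repo ID below is hypothetical.
t2i = DiffusionPipeline.from_pretrained(
    "kandinskylab/Kandinsky-5.0-T2I-Lite-sft-Diffusers",  # hypothetical repo ID
    torch_dtype=torch.bfloat16,
).to("cuda")
image = t2i(prompt="A winter city street in watercolor").images[0]
image.save("t2i.png")

# Image-to-image: same pattern, with a source image to modify.
i2i = DiffusionPipeline.from_pretrained(
    "kandinskylab/Kandinsky-5.0-I2I-Lite-sft-Diffusers",  # hypothetical repo ID
    torch_dtype=torch.bfloat16,
).to("cuda")
source = load_image("t2i.png")
edited = i2i(prompt="Make it a summer evening", image=source).images[0]
edited.save("i2i.png")
```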
A compact, dialogue-oriented MoE model from the GigaChat family, commonly referred to as GigaChat 3 Lightning, with 10 billion total and 1.8 billion active parameters, optimized for high-speed inference and deployment in local or high-load production environments. In understanding Russian, it surpasses popular models in the 3-4B class while operating significantly faster.
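A minimal chat-completion sketch with Hugging Face transformers, assuming the checkpoint is published in a transformers-compatible format; the repository ID is an assumption, and trust_remote_code may be required if the MoE layers ship as custom code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ai-sage/GigaChat3-10B-A1.8B"  # hypothetical repo ID
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # may be needed for the custom MoE architecture
)

messages = [{"role": "user", "content": "Explain briefly what a Mixture-of-Experts model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```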
HunyuanVideo-1.5 is a lightweight text-to-video and image-to-video generation model developed by Tencent, featuring 8.3 billion parameters while maintaining state-of-the-art visual quality and motion coherence. It is designed to run efficiently on consumer-grade GPUs, making advanced video creation accessible to developers and creators.
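A text-to-video sketch through the generic Diffusers entry point, assuming a Diffusers-format checkpoint; the repo ID and generation parameters are illustrative, and CPU offloading is the usual way to fit a model of this size on consumer GPUs.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "tencent/HunyuanVideo-1.5",  # hypothetical repo ID / format
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # trade speed for VRAM on consumer GPUs

frames = pipe(
    prompt="A red fox running through fresh snow, golden hour",
    num_frames=61,           # illustrative value
    num_inference_steps=30,  # illustrative value
).frames[0]
export_to_video(frames, "fox.mp4", fps=15)
```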
A 32 billion parameter rectified flow transformer designed for image generation, editing, and combination based on text instructions. It supports open-ended tasks such as text-to-image generation, single-reference editing, and multi-reference editing without requiring additional finetuning. Trained using guidance distillation to enhance efficiency, the model is optimized for research and creative applications under a non-commercial license.
This is an image-to-video (I2V) model with 19 billion parameters, delivering high-quality generation in HD resolution. The model belongs to the Kandinsky 5.0 family, which includes models for video and image generation.
This is a text-to-video (T2V) model with 19 billion parameters, delivering high-quality generation in HD resolution. The model belongs to the Kandinsky 5.0 family, which includes models for video and image generation.
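For the I2V model in this pair, a conditioning image is passed alongside the prompt; the repo ID and argument names below follow common Diffusers conventions and are assumptions.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video, load_image

pipe = DiffusionPipeline.from_pretrained(
    "kandinskylab/Kandinsky-5.0-I2V-Pro-sft-Diffusers",  # hypothetical repo ID
    torch_dtype=torch.bfloat16,
).to("cuda")

first_frame = load_image("t2i.png")  # any still image to animate
frames = pipe(
    prompt="The camera slowly pans right as snow begins to fall",
    image=first_frame,  # conditioning frame; argument name per Diffusers convention
).frames[0]
export_to_video(frames, "i2v.mp4", fps=24)
```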
A compact multimodal model from Baidu, built on an innovative heterogeneous Mixture-of-Experts (MoE) architecture that separates parameters for textual and visual experts. During inference, only 3 billion parameters are activated out of a total model size of 28 billion parameters. The model is an upgraded version of the base ERNIE-4.5-VL-28B-A3B, specifically optimized for multimodal reasoning tasks through a "Thinking Mode." It supports images, videos, visual grounding, and tool invocation, with a native maximum context length of 131K tokens, and stands out for its moderate computational requirements.
The largest open-source reasoning model from Moonshot AI at the time of its release, featuring a Mixture-of-Experts architecture (1 trillion parameters total, 32 billion active), capable of executing 200–300 consecutive tool calls without quality degradation while seamlessly interleaving function calls with reasoning chains. The model supports a 256K-token context window, incorporates native INT4 quantization for significantly accelerated inference with virtually no loss in accuracy, and employs Multi-Head Latent Attention (MLA) for highly efficient processing of long sequences. Kimi K2 Thinking sets new records among open-source models and outperforms leading commercial systems—including GPT-5 and Claude Sonnet 4.5—on a broad range of benchmarks.
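A sketch of driving the model's tool-calling loop through an OpenAI-compatible endpoint (the usual serving route for open checkpoints of this size, e.g. behind vLLM); the base URL, served model name, and the tool itself are assumptions.

```python
import json
from openai import OpenAI

# Hypothetical OpenAI-compatible endpoint (e.g. a local vLLM server).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Oslo?"}]
resp = client.chat.completions.create(
    model="kimi-k2-thinking",  # hypothetical served model name
    messages=messages,
    tools=tools,
)
call = resp.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)

# Feed the tool result back so the model can interleave it with its reasoning.
messages.append(resp.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": call.id,
    "content": json.dumps({"city": args["city"], "temp_c": -3}),
})
final = client.chat.completions.create(
    model="kimi-k2-thinking", messages=messages, tools=tools
)
print(final.choices[0].message.content)
```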
LongCat-Video is a 13.6B-parameter foundational video generation model developed to excel in Text-to-Video, Image-to-Video, and Video-Continuation tasks. It supports efficient and high-quality generation of long videos (minutes-long) without color drifting or quality degradation, marking an initial step toward world models.
A large language model that combines powerful reasoning capabilities with robust agent skills, designed to solve complex, multi-step tasks in real-world dynamic environments. Thanks to an innovative training approach utilizing high-quality, diverse data and "interleaved thinking," the M2 effectively combines high performance on academic benchmarks with exceptional robustness and adaptability when working with unfamiliar tools and scenarios.
With only 2 billion parameters, a 256K context window, and support for edge inference, this is one of the smallest visual reasoning models, specialized in multi-step reasoning for the visual analysis of images and videos; it is almost literally capable of "thinking while looking at images." Unlike the Instruct version, this model generates detailed chains of thought before producing the final answer, which improves accuracy at the cost of processing speed.
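A small sketch of separating the chain of thought from the final answer when consuming raw output from such a thinking model; the `<think>...</think>` delimiter is an assumption carried over from the convention many open reasoning models use.

```python
import re

def split_reasoning(raw: str) -> tuple[str, str]:
    """Split raw model output into (reasoning, final_answer).

    Assumes the model wraps its chain of thought in <think>...</think>,
    a convention common among open reasoning models.
    """
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if match is None:
        return "", raw.strip()
    reasoning = match.group(1).strip()
    answer = raw[match.end():].strip()
    return reasoning, answer

raw = "<think>The sign says 'Exit'; the arrow points left.</think>The exit is to the left."
thoughts, answer = split_reasoning(raw)
print(answer)  # -> The exit is to the left.
```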
The most compact model in the Qwen3-VL multimodal family. With 2 billion parameters and a dense architecture, it is optimized for fast conversational systems and deployment on edge devices. At the same time, it retains all the advanced capabilities of the series: high-quality comprehension of images, videos, and text, OCR in 32 languages, object positioning, timestamp localization, and a native context length of 256K tokens.
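A minimal image-understanding sketch using the generic transformers Auto classes; the repository ID and image URL are assumptions, and the chat-template call follows the pattern recent transformers releases use for vision-language checkpoints.

```python
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

repo = "Qwen/Qwen3-VL-2B-Instruct"  # assumed repo ID
processor = AutoProcessor.from_pretrained(repo)
model = AutoModelForImageTextToText.from_pretrained(
    repo, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/receipt.jpg"},  # placeholder URL
        {"type": "text", "text": "Read the total amount on this receipt."},
    ],
}]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output[:, inputs["input_ids"].shape[-1]:],
                             skip_special_tokens=True)[0])
```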
A powerful multimodal model with 32 billion parameters and native support for a 256K context window, delivering state-of-the-art quality in multimodal understanding. The model outperforms the previous-generation 72B-parameter version on most benchmarks, as well as comparable solutions from other developers such as GPT-5 and Claude 4.
A reasoning version of the flagship 32-billion-parameter dense model from the Qwen3-VL family, optimized for multi-step thinking and solving highly complex multimodal tasks that require deep analysis and logical inference based on visual information. It supports a native context of 256K tokens (extendable to 1M) and achieves state-of-the-art results among multimodal reasoning models of similar size.
A Russian-language-adapted multimodal model by Avito, based on Qwen2.5-VL-7B-Instruct with an optimized architecture. The model processes Russian-language queries twice as fast as the original and significantly outperforms it in generating ad descriptions, while retaining its general-purpose image-processing capabilities.
A Russian-language LLM developed by Avito, based on Qwen3-8B and featuring a unique hybrid tokenizer specifically adapted for Russian tokens. The model demonstrates outstanding performance on Russian-language benchmarks, particularly in mathematics and function calling, while its optimized architecture enables it to process queries 15–25% faster than the original version.
The Krea Realtime 14B model is a distilled version of the Wan 2.1 14B model (developed by Wan-AI) for text-to-video generation tasks. It was transformed into an autoregressive model using the Self-Forcing method, achieving an inference speed of 11 frames per second with 4 inference steps on a single NVIDIA B200 GPU.
An innovative vision-language model (VLM) for text recognition and document parsing, developed by DeepSeek as part of research into representing information through the visual modality. The model takes a unique approach: instead of traditional text tokens, it encodes document content with visual tokens, achieving 10-20x text compression while maintaining an OCR accuracy of 97%.
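A heavily hedged loading sketch: this checkpoint ships with custom modeling code, so everything past from_pretrained, including the infer helper and its arguments, is an assumption modeled on the repository's documented usage rather than a standard transformers API.

```python
from transformers import AutoModel, AutoTokenizer

repo = "deepseek-ai/DeepSeek-OCR"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModel.from_pretrained(
    repo,
    trust_remote_code=True,  # loads the repo's custom vision-token OCR code
    use_safetensors=True,
).eval().cuda()

# Hypothetical custom helper exposed by the repo's remote code; the name and
# signature are assumptions based on its model card, not a transformers API.
result = model.infer(
    tokenizer,
    prompt="<image>\nFree OCR.",
    image_file="page.png",
)
print(result)
```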
A reasoning-optimized 4B version of the Qwen3-VL model series with a 256K context window (expandable to 1M). Response generation always employs reasoning chains, enabling it to tackle complex multimodal tasks at the cost of some throughput. Its performance is only slightly inferior to Qwen3-VL-8B, despite significantly lower hardware requirements.