GLM-4.5V is a next-generation multimodal model built on the GLM-4.5-Air architecture, with 106 billion total parameters, of which 12 billion are active per token. Architecturally it takes a hybrid approach: the text backbone uses a Mixture-of-Experts (MoE) scheme with 128 experts, 8 of which are activated per token, and comprises 46 layers with 96 attention heads. The vision encoder is a 24-layer transformer with a scalable attention structure; it accepts images up to 336×336 pixels and handles video through spatio-temporal patch aggregation, enabling efficient analysis of long videos and complex visual scenes.
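The MoE routing described above can be illustrated with a minimal sketch: a router scores all 128 experts for each token, keeps the top 8, and renormalizes their gate weights. This is a generic top-k router for intuition only, not GLM-4.5V's actual implementation, and the numbers are taken from the description above.

```python
import math
import random

NUM_EXPERTS = 128   # total experts in the text block (per the description above)
TOP_K = 8           # experts activated per token

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route_token(router_logits, top_k=TOP_K):
    """Pick the top-k experts for one token and renormalize their gate weights.

    router_logits: one score per expert, as produced by a learned router.
    Returns a list of (expert_index, gate_weight) pairs whose weights sum to 1.
    """
    probs = softmax(router_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]

# Toy demonstration: random router scores for a single token.
random.seed(0)
logits = [random.gauss(0, 1) for _ in range(NUM_EXPERTS)]
selected = route_token(logits)
assert len(selected) == TOP_K
assert abs(sum(w for _, w in selected) - 1.0) < 1e-9
```

Because only 8 of 128 experts run per token, the compute per token scales with the 12 billion active parameters rather than the full 106 billion.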
A key technical innovation of GLM-4.5V is the integration of 3D-RoPE (3D rotary positional encoding) for enhanced spatial awareness, combined with an advanced attention modulator (FA³) that accelerates inference and reduces memory consumption when processing large video streams. The model also offers a Thinking Mode, letting users switch between fast response generation and deep, step-by-step reasoning. This flexibility makes GLM-4.5V particularly valuable for intelligent-agent scenarios and GUI automation: the model can "understand" interfaces and plan actions inside applications, which is crucial for building agent-based AI systems and robotic process automation.
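Toggling the Thinking Mode described above typically comes down to one field in the request payload. The sketch below builds such a payload for a self-hosted, OpenAI-compatible deployment; the `thinking` field name and its values are assumptions, so check your serving stack's documentation for the exact switch.

```python
import json

def build_request(prompt, deep_reasoning=False):
    """Build a chat-completion payload, optionally enabling step-by-step reasoning.

    The "thinking" field is a hypothetical toggle for GLM-4.5V's Thinking Mode;
    the actual parameter name depends on how the model is served.
    """
    return {
        "model": "glm-4.5v",
        "messages": [{"role": "user", "content": prompt}],
        "thinking": {"type": "enabled" if deep_reasoning else "disabled"},
    }

# Fast answer for a simple lookup; deep reasoning for multi-step GUI planning.
fast = build_request("Summarize this chart.")
deep = build_request("Plan the UI steps to export a report.", deep_reasoning=True)
print(json.dumps(deep, indent=2))
```

For agent workloads, the usual pattern is to enable deep reasoning only for planning steps and keep fast mode for routine lookups, since step-by-step reasoning costs extra latency and tokens.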
At launch, GLM-4.5V achieves state-of-the-art performance on 41 of 42 major multimodal benchmarks covering image and video understanding, including MMBench, AI2D, MMStar, MathVista, OSRBench, and others.
GLM-4.5V's capabilities span a broad range of multimodal tasks—from advanced image analysis (scene understanding, frame-by-frame analytics, spatial object recognition) to video segmentation and event detection in long videos, interpretation of complex diagrams and documents, image captioning, frontend code generation from screenshots, text extraction from application interfaces, and more. The model supports bounding box generation, precise object recognition, and flexible integration with external visual data, enabling unique solutions for e-commerce, healthcare, security, document processing, and various digital assistant applications.
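Tasks like bounding-box generation or frontend code generation from screenshots are driven by a multimodal request that pairs an image with a text instruction. The sketch below assembles such a message in the OpenAI-style chat format many serving stacks expose; the field names and the grounding prompt are assumptions to verify against your deployment, not an official GLM-4.5V template.

```python
import base64
import json

def build_vision_request(image_bytes, instruction):
    """Build an OpenAI-style multimodal message: one inline image plus a text instruction.

    The image is embedded as a base64 data URL so no file hosting is needed.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "glm-4.5v",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
                {"type": "text", "text": instruction},
            ],
        }],
    }

# Example: ask for UI element grounding on a screenshot (bytes stubbed here).
req = build_vision_request(
    b"\x89PNG...",  # placeholder; pass real PNG bytes in practice
    "Return a bounding box [x1, y1, x2, y2] for every button in this screenshot.",
)
print(json.dumps(req)[:120])
```

The same payload shape works for the other tasks listed above; only the instruction changes, e.g. "Generate the HTML/CSS for this page" for frontend code generation.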
Model Name | Context | Type | GPU | TPS | Status | Link
---|---|---|---|---|---|---
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following scenarios:
Name | vCPU | RAM, MB | Disk, GB | GPU | Price
---|---|---|---|---|---
 | 16 | 98304 | 160 | 3 | $1.34
 | 16 | 98304 | 160 | 3 | $2.45
 | 16 | 131072 | 160 | 1 | $2.71
 | 16 | 98304 | 160 | 3 | $3.23
 | 16 | 98304 | 160 | 3 | $4.34
 | 16 | 131072 | 160 | 1 | $5.23
Contact our dedicated neural network support team at nn@immers.cloud, or send your request to the sales department at sale@immers.cloud.