GLM-4.5V

Tags: reasoning, multimodal

GLM-4.5V is a next-generation multimodal model built on the GLM-4.5-Air architecture, with 106 billion total parameters, of which 12 billion are active per token. Architecturally, it takes a hybrid approach: the text backbone uses a Mixture-of-Experts (MoE) scheme with 128 experts, 8 of which are activated per token, across 46 layers with 96 attention heads. The vision encoder is a 24-layer transformer with a scalable attention structure that supports images up to 336×336 pixels and handles video through spatio-temporal patch aggregation, enabling efficient analysis of long videos and complex visual scenes.
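
To illustrate the 8-of-128 expert routing described above, here is a minimal, self-contained sketch of top-k MoE routing in PyTorch. It is a generic router, not GLM-4.5V's actual implementation; the hidden size, the single-linear "experts", and the softmax weighting are placeholder assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

# Illustrative dimensions only; the model's real hidden size and expert
# shapes are defined by its config and are not reproduced here.
hidden_size, num_experts, top_k = 4096, 128, 8

# Router scores every token against all 128 experts.
router = torch.nn.Linear(hidden_size, num_experts, bias=False)

# Toy experts: one linear layer each (real experts are gated MLP blocks).
experts = torch.nn.ModuleList(
    [torch.nn.Linear(hidden_size, hidden_size) for _ in range(num_experts)]
)

def moe_forward(x: torch.Tensor) -> torch.Tensor:
    """x: (tokens, hidden). Only the top-8 experts run for each token."""
    scores = router(x)                                # (tokens, 128)
    weights, idx = torch.topk(scores, top_k, dim=-1)  # choose 8 of 128 per token
    weights = F.softmax(weights, dim=-1)              # normalize over the chosen 8
    out = torch.zeros_like(x)
    for slot in range(top_k):
        for e in idx[:, slot].unique().tolist():
            mask = idx[:, slot] == e                  # tokens routed to expert e in this slot
            out[mask] += weights[mask, slot].unsqueeze(-1) * experts[e](x[mask])
    return out

tokens = torch.randn(4, hidden_size)
print(moe_forward(tokens).shape)  # torch.Size([4, 4096])
```

Because only 8 experts process each token, the compute per token scales with the 12B active parameters rather than the full parameter count.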

A key technical innovation of GLM-4.5V is the integration of 3D-RoPE (3D Rotary Positional Encoding) for enhanced spatial awareness, together with an optimized attention mechanism (FA³) that accelerates inference and reduces memory consumption when processing large video streams. The model also offers a Thinking Mode, letting users switch between fast response generation and deep, step-by-step reasoning. This flexibility makes GLM-4.5V particularly valuable for intelligent agents and GUI automation: it can "understand" interfaces and plan actions within applications, which is crucial for building agent-based AI systems and robotic process automation.
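
In practice, the Thinking Mode switch is usually exposed as a request-level flag. The sketch below assumes an OpenAI-compatible endpoint serving GLM-4.5V (for example, a self-hosted vLLM or SGLang instance) that accepts a `thinking` field in the request body; the base URL, model name, and the exact field name are assumptions and may differ for your deployment.

```python
from openai import OpenAI

# Hypothetical endpoint URL and API key; replace with your deployment's values.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def ask(prompt: str, deep_reasoning: bool) -> str:
    response = client.chat.completions.create(
        model="GLM-4.5V",
        messages=[{"role": "user", "content": prompt}],
        # Assumed request-body flag for toggling Thinking Mode;
        # check your serving stack's documentation for the exact name.
        extra_body={"thinking": {"type": "enabled" if deep_reasoning else "disabled"}},
    )
    return response.choices[0].message.content

print(ask("Give a one-sentence definition of rotary positional encoding.", deep_reasoning=False))
print(ask("Explain step by step how rotary positional encoding works.", deep_reasoning=True))
```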

At launch, GLM-4.5V achieves state-of-the-art performance on 41 of 42 major benchmarks for multimodal LLMs that process images and video, including MMBench, AI2D, MMStar, MathVista, OCRBench, and others.

GLM-4.5V's capabilities span a broad range of multimodal tasks—from advanced image analysis (scene understanding, frame-by-frame analytics, spatial object recognition) to video segmentation and event detection in long videos, interpretation of complex diagrams and documents, image captioning, frontend code generation from screenshots, text extraction from application interfaces, and more. The model supports bounding box generation, precise object recognition, and flexible integration with external visual data, enabling unique solutions for e-commerce, healthcare, security, document processing, and various digital assistant applications.
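
A minimal sketch of single-image inference with Hugging Face Transformers, for example asking the model to locate UI elements in a screenshot. It assumes a recent Transformers release (the spec below lists 4.55.0.dev0) whose chat template accepts image messages in this form; the image URL and generation settings are illustrative, and the auto class should resolve to the model's dedicated vision-language class.

```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "zai-org/GLM-4.5V"  # repository name as published by Z.ai

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {
        "role": "user",
        "content": [
            # Illustrative image URL; replace with your own screenshot or photo.
            {"type": "image", "url": "https://example.com/screenshot.png"},
            {"type": "text", "text": "List every button in this interface with its bounding box."},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=512)
answer = processor.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)
```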


Announce Date: August 11, 2025
Parameters: 108B
Experts: 128
Activated: 12B
Context: 66K
Attention Type: Full Attention
VRAM requirements: 61.8 GB with 4-bit quantization (a rough breakdown follows this list)
Developer: Z.ai
Transformers Version: 4.55.0.dev0
License: MIT
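
As a sanity check on the 61.8 GB figure, a back-of-the-envelope estimate: 4-bit weights for ~108B parameters take roughly 54 GB, and the remainder goes to the KV cache, activations, and layers typically kept in higher precision. The overhead constant below is an assumption for illustration, not a measurement.

```python
# Rough VRAM estimate for 4-bit inference; the overhead figure is an assumption.
total_params = 108e9          # from the spec above
bits_per_weight = 4           # 4-bit quantization
weights_gb = total_params * bits_per_weight / 8 / 1e9   # ~54 GB of quantized weights

overhead_gb = 8               # assumed KV cache + activations + unquantized layers
print(f"weights ≈ {weights_gb:.1f} GB, total ≈ {weights_gb + overhead_gb:.1f} GB")
# Printed total (~62 GB) is in line with the 61.8 GB listed above.
```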

Public endpoint

Use our pre-built public endpoints to test inference and explore GLM-4.5V capabilities.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable the full context window for long sequences,
  • ensure top-tier security by processing data in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (a loading sketch follows this list).
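
For the custom-weights scenario, the sketch below shows one common way to attach a LoRA adapter with the PEFT library. The adapter path is hypothetical, resolving the model through AutoModelForImageTextToText is an assumption about your setup, and serving frameworks such as vLLM provide their own adapter-loading options.

```python
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText
from peft import PeftModel

base_id = "zai-org/GLM-4.5V"
adapter_path = "/path/to/your-lora-adapter"   # hypothetical local adapter directory

processor = AutoProcessor.from_pretrained(base_id)
base = AutoModelForImageTextToText.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the fine-tuned LoRA weights on top of the base checkpoint.
model = PeftModel.from_pretrained(base, adapter_path)
# Optionally fold the adapter into the base weights for faster inference.
model = model.merge_and_unload()
```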

Recommended configurations for hosting GLM-4.5V

Prices:
Name                      vCPU   RAM, GB   Disk, GB   GPUs   Price, hour
teslaa10-3.16.96.160      16     96        160        3      $1.34
rtx3090-3.16.96.160       16     96        160        3      $2.45
teslaa100-1.16.128.160    16     128       160        1      $2.71
rtx4090-3.16.96.160       16     96        160        3      $3.23
rtx5090-3.16.96.160       16     96        160        3      $4.34
teslah100-1.16.128.160    16     128       160        1      $5.23

Prices:
Name                      vCPU   RAM, GB   Disk, GB   GPUs   Price, hour
teslaa100-2.24.256.240    24     256       240        2      $5.36
rtx5090-4.16.128.320      16     128       320        4      $5.76
teslah100-2.24.256.240    24     256       240        2      $10.41

Prices:
Name                      vCPU   RAM, GB   Disk, GB   GPUs   Price, hour
teslaa100-3.32.384.320    32     384       320        3      $8.01
teslah100-3.32.384.320    32     384       320        3      $15.58

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.