GLM-4.6V-Flash

reasoning
multimodal

GLM-4.6V-Flash is a lightweight model in the GLM-V family of multimodal language models. With 9 billion parameters, it is optimized for local deployment and low-latency applications. Despite its compact size, it retains the key capabilities of the larger 106-billion-parameter version, including a 128,000-token context window and support for Native Multimodal Function Calling, an innovation first introduced in the GLM-V series that lets images, screenshots, and documents be passed directly as tool parameters without intermediate conversion to text. In a single pass the model can process approximately 150 document pages, 200 slides, or an hour of video.
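The multimodal function-calling flow described above can be sketched with an OpenAI-style request payload. This is an illustrative assumption, not the official schema: the model ID, the `extract_table` tool, and its parameters are all hypothetical.

```python
import base64
import json

def build_multimodal_tool_call(image_bytes: bytes, question: str) -> dict:
    """Build an OpenAI-style chat request that attaches an image directly
    alongside a tool definition (sketch only; not the official API schema)."""
    data_url = "data:image/png;base64," + base64.b64encode(image_bytes).decode()
    return {
        "model": "glm-4.6v-flash",  # assumed model ID
        "messages": [{
            "role": "user",
            "content": [
                # The image travels with the request itself, no OCR/text step.
                {"type": "image_url", "image_url": {"url": data_url}},
                {"type": "text", "text": question},
            ],
        }],
        "tools": [{
            "type": "function",
            "function": {
                "name": "extract_table",  # hypothetical tool for illustration
                "description": "Extract a table from the attached document image.",
                "parameters": {
                    "type": "object",
                    "properties": {"format": {"type": "string", "enum": ["csv", "json"]}},
                    "required": ["format"],
                },
            },
        }],
    }

payload = build_multimodal_tool_call(b"\x89PNG...", "Extract the revenue table.")
print(json.dumps(payload, indent=2)[:80])
```

The point of the sketch is the shape of the request: the image is embedded as a data URL in the same message that carries the tool definitions, so the model can decide to call a tool based on visual content.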

The model demonstrates state-of-the-art results among open-source models of comparable scale. On MMBench V1.1 the Flash version scores 86.9; on MathVista (multimodal mathematical reasoning), 82.7; on OCRBench (text recognition in images), 84.7; and on AI2D (scientific-diagram understanding), 89.2. Its results on agent-style tasks are particularly strong: 71.8 on WebVoyager (browser navigation) and 69.8 on Design2Code (UI-to-code reproduction), and it outperforms significantly larger models such as Qwen2.5-VL-72B on long-document understanding tasks.

Use cases for the model include: local processing of confidential documents (financial reports, medical records) with table and chart analysis; generating frontend code (precise HTML/CSS) from UI screenshots, with iterative editing via text commands; and building multimodal agents that automate tasks such as visual web search or processing mixed-media content (text + images) for social media. Thanks to its MIT license and support in inference frameworks such as vLLM and SGLang, the model is ready for industrial deployment in both cloud and edge scenarios.
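For the vLLM route mentioned above, a launch command might look like the following. This is a hedged sketch: the Hugging Face repo ID (`zai-org/GLM-4.6V-Flash`) is an assumption, and the flags should be checked against the model card and your vLLM version.

```shell
# Sketch: serve GLM-4.6V-Flash locally with vLLM's OpenAI-compatible server.
# --max-model-len enables the full 131,072-token context (the KV cache must
# fit in VRAM at this length); --tensor-parallel-size should match the
# number of GPUs on the instance.
vllm serve zai-org/GLM-4.6V-Flash \
    --max-model-len 131072 \
    --tensor-parallel-size 2 \
    --trust-remote-code
```

Once running, the server accepts standard /v1/chat/completions requests on localhost.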


Announce Date: 07.12.2025
Parameters: 10,292,777,472 (~10.3B)
Context: 131K
Layers: 40
Attention Type: Full Attention
Developer: Z.ai
Transformers Version: 5.0.0rc0
License: MIT
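The parameter count above gives a quick back-of-envelope way to size GPU memory for the configurations listed below. This is a rough sketch: it covers weights only, while real serving also needs room for the KV cache, activations, and framework overhead.

```python
# Approximate GPU memory needed for the model weights alone.
PARAMS = 10_292_777_472  # parameter count from the model card above

def weight_memory_gib(params: int, bytes_per_param: float) -> float:
    """Memory for weights in GiB at a given precision (rough estimate)."""
    return params * bytes_per_param / 2**30

fp16 = weight_memory_gib(PARAMS, 2)    # BF16/FP16: 2 bytes per parameter
int4 = weight_memory_gib(PARAMS, 0.5)  # rough 4-bit quantization estimate

print(f"FP16 weights: ~{fp16:.1f} GiB")   # ~19.2 GiB, fits a single 24 GB card
print(f"4-bit weights: ~{int4:.1f} GiB")  # ~4.8 GiB
```

This explains why even single-GPU 24 GB configurations appear in the pricing tables: FP16 weights fit, with quantization freeing additional room for long-context KV cache.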

Public endpoint

Use our pre-built public endpoints for free to test inference and explore GLM-4.6V-Flash capabilities. You can obtain an API access token on the token management page after registration and verification.
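A token-authenticated request to such an endpoint can be sketched with only the standard library. The base URL and token below are placeholders, not real service values; the actual network call is left commented out.

```python
import json
import urllib.request

def build_chat_request(base_url: str, token: str, prompt: str) -> urllib.request.Request:
    """Prepare an OpenAI-style chat request with a bearer token (sketch;
    substitute the real endpoint URL and your API token)."""
    body = json.dumps({
        "model": "GLM-4.6V-Flash",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        base_url + "/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("https://example.invalid", "YOUR_TOKEN", "Hello")
# response = urllib.request.urlopen(req)  # uncomment with a real endpoint
print(req.get_header("Authorization"))
```

The same request shape works against a private vLLM or SGLang instance, since both expose the OpenAI-compatible /v1/chat/completions route.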
Model Name Context Type GPU TPS Status Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended configurations for hosting GLM-4.6V-Flash

Prices (all configurations serve the full 131,072-token context):

Name                         Parallelism  vCPU  RAM, MB  Disk, GB  GPUs  Price, hour
teslaa10-1.16.32.160         -            16    32768    160       1     $0.53
teslat4-2.16.32.160          tensor       16    32768    160       2     $0.54
teslaa2-2.16.32.160          tensor       16    32768    160       2     $0.57
rtx2080ti-2.12.64.160        tensor       12    65536    160       2     $0.69
rtx3090-1.16.24.160          -            16    24576    160       1     $0.83
rtx4090-1.16.32.160          -            16    32768    160       1     $1.02
teslav100-1.12.64.160        -            12    65536    160       1     $1.20
rtxa5000-2.16.64.160.nvlink  tensor       16    65536    160       2     $1.23
rtx3080-3.16.64.160          pipeline     16    65536    160       3     $1.43
rtx5090-1.16.64.160          -            16    65536    160       1     $1.59
rtx3080-4.16.64.160          tensor       16    65536    160       4     $1.82
teslaa100-1.16.64.160        -            16    65536    160       1     $2.37
h100-1.16.64.160             -            16    65536    160       1     $3.83
h100nvl-1.16.96.160          -            16    98304    160       1     $4.11
h200-1.16.128.160            -            16    131072   160       1     $4.74
Prices (all configurations serve the full 131,072-token context):

Name                         Parallelism  vCPU  RAM, MB  Disk, GB  GPUs  Price, hour
teslaa10-1.16.32.160         -            16    32768    160       1     $0.53
teslat4-2.16.32.160          tensor       16    32768    160       2     $0.54
teslaa2-2.16.32.160          tensor       16    32768    160       2     $0.57
rtx3090-1.16.24.160          -            16    24576    160       1     $0.83
rtx2080ti-3.12.24.120        pipeline     12    24576    120       3     $0.84
rtx4090-1.16.32.160          -            16    32768    160       1     $1.02
rtx2080ti-4.16.32.160        tensor       16    32768    160       4     $1.12
teslav100-1.12.64.160        -            12    65536    160       1     $1.20
rtxa5000-2.16.64.160.nvlink  tensor       16    65536    160       2     $1.23
rtx3080-3.16.64.160          pipeline     16    65536    160       3     $1.43
rtx5090-1.16.64.160          -            16    65536    160       1     $1.59
rtx3080-4.16.64.160          tensor       16    65536    160       4     $1.82
teslaa100-1.16.64.160        -            16    65536    160       1     $2.37
h100-1.16.64.160             -            16    65536    160       1     $3.83
h100nvl-1.16.96.160          -            16    98304    160       1     $4.11
h200-1.16.128.160            -            16    131072   160       1     $4.74
Prices (all configurations serve the full 131,072-token context):

Name                         Parallelism  vCPU  RAM, MB  Disk, GB  GPUs  Price, hour
teslat4-3.32.64.160          pipeline     32    65536    160       3     $0.88
teslaa10-2.16.64.160         tensor       16    65536    160       2     $0.93
teslat4-4.16.64.160          tensor       16    65536    160       4     $0.96
teslaa2-3.32.128.160         pipeline     32    131072   160       3     $1.06
rtx2080ti-4.16.32.160        tensor       16    32768    160       4     $1.12
teslav100-1.12.64.160        -            12    65536    160       1     $1.20
rtxa5000-2.16.64.160.nvlink  tensor       16    65536    160       2     $1.23
teslaa2-4.32.128.160         tensor       32    131072   160       4     $1.26
rtx3090-2.16.64.160          tensor       16    65536    160       2     $1.56
rtx5090-1.16.64.160          -            16    65536    160       1     $1.59
rtx3080-4.16.64.160          tensor       16    65536    160       4     $1.82
rtx4090-2.16.64.160          tensor       16    65536    160       2     $1.92
teslaa100-1.16.64.160        -            16    65536    160       1     $2.37
h100-1.16.64.160             -            16    65536    160       1     $3.83
h100nvl-1.16.96.160          -            16    98304    160       1     $4.11
h200-1.16.128.160            -            16    131072   160       1     $4.74


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.