GLM-4.6V-Flash

reasoning · multimodal

GLM-4.6V-Flash is the lightweight member of the GLM-V family of multimodal language models, with 9 billion parameters, optimized for local deployment and low-latency applications. Despite its compact size, it retains the key capabilities of the larger 106-billion-parameter version, including a 128K-token (131,072) context window and Native Multimodal Function Calling, an innovation first introduced in the GLM-V series that allows images, screenshots, and documents to be passed directly as tool parameters without intermediate text conversion. The context window is large enough to process roughly 150 document pages, 200 slides, or an hour of video in a single pass.
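Below is a minimal sketch of what multimodal function calling looks like through an OpenAI-compatible chat API; the endpoint URL, model id, and the extract_invoice_fields tool are illustrative assumptions, not part of any official API.

```python
# A sketch of multimodal function calling against an OpenAI-compatible
# server; base_url, model name, and the tool schema are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "extract_invoice_fields",  # hypothetical tool
        "description": "Extract structured fields from an invoice image.",
        "parameters": {
            "type": "object",
            "properties": {
                "vendor": {"type": "string"},
                "total": {"type": "number"},
            },
            "required": ["vendor", "total"],
        },
    },
}]

response = client.chat.completions.create(
    model="glm-4.6v-flash",
    messages=[{
        "role": "user",
        "content": [
            # The image goes straight into the message; no OCR/caption
            # step is required before the model can call the tool.
            {"type": "image_url",
             "image_url": {"url": "https://example.com/invoice.png"}},
            {"type": "text",
             "text": "Read this invoice and extract the vendor and total."},
        ],
    }],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```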

The model demonstrates state-of-the-art results among open-source models of comparable scale: 86.9 on MMBench V1.1, 82.7 on MathVista (multimodal mathematical reasoning), 84.7 on OCRBench (text recognition in images), and 89.2 on AI2D (scientific-diagram understanding). It is particularly strong on agentic tasks, scoring 71.8 on WebVoyager (autonomous browser navigation) and 69.8 on Design2Code (reproducing UIs as code), and it outperforms significantly larger models such as Qwen2.5-VL-72B on long-document understanding.

Use cases for the model include: local processing of confidential documents (financial reports, medical records) with table and chart analysis; generating frontend code (precise HTML/CSS) from UI screenshots, with iterative editing via follow-up text commands; and building multimodal agents that automate tasks such as visual web search or handling mixed text-and-image content for social media. Thanks to its MIT license and support in inference frameworks such as vLLM and SGLang, the model is ready for production deployment in both cloud and edge scenarios; a minimal serving sketch follows.
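The sketch below shows offline inference with vLLM for the screenshot-to-code use case; the Hugging Face repo id zai-org/GLM-4.6V-Flash and the reduced max_model_len are assumptions to adapt to your actual checkpoint and GPU.

```python
# A minimal vLLM sketch for image-to-code generation; the repo id is
# an assumption — substitute the checkpoint you actually deploy.
from vllm import LLM, SamplingParams

# Shrink the context window to fit smaller GPUs; raise it toward
# 131072 on cards with enough memory.
llm = LLM(model="zai-org/GLM-4.6V-Flash", max_model_len=32768)
params = SamplingParams(temperature=0.6, max_tokens=1024)

messages = [{
    "role": "user",
    "content": [
        {"type": "image_url",
         "image_url": {"url": "https://example.com/mockup.png"}},  # placeholder image
        {"type": "text",
         "text": "Reproduce this UI as a single HTML file with inline CSS."},
    ],
}]
outputs = llm.chat(messages, params)
print(outputs[0].outputs[0].text)
```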


Announce Date: 07.12.2025
Parameters: 11B
Context: 128K (131,072 tokens)
Layers: 40
Attention Type: Full Attention
Developer: Z.ai
Transformers Version: 5.0.0rc0
License: MIT

Public endpoint

Use our pre-built public endpoints for free to test inference and explore GLM-4.6V-Flash capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
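Once an endpoint is published, querying it is a standard OpenAI-compatible request; in the sketch below, the base URL and model name are placeholders for the values the endpoint page will show, and IMMERS_API_TOKEN is the token obtained from the management page.

```python
# A sketch of calling a public endpoint; base_url and model are
# placeholders — copy the real values from the endpoint page.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-endpoint.invalid/v1",   # placeholder URL
    api_key=os.environ["IMMERS_API_TOKEN"],           # token from the management page
)
resp = client.chat.completions.create(
    model="GLM-4.6V-Flash",
    messages=[{"role": "user", "content": "What kinds of inputs can you process?"}],
)
print(resp.choices[0].message.content)
```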

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • to maximize endpoint performance,
  • to enable full context for long sequences,
  • to ensure top-tier security for data processing in an isolated, dedicated environment,
  • to use custom weights, such as fine-tuned models or LoRA adapters (see the sketch after this list).
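
For the custom-weights scenario, here is a sketch of attaching a LoRA adapter at load time with vLLM, assuming vLLM's LoRA support covers this architecture; the adapter name and path are hypothetical.

```python
# Serving a fine-tuned LoRA adapter on a private instance; the adapter
# id and path below are hypothetical examples.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(model="zai-org/GLM-4.6V-Flash", enable_lora=True)
out = llm.generate(
    "Summarize the attached quarterly report in three bullet points.",
    SamplingParams(max_tokens=256),
    lora_request=LoRARequest("finance-adapter", 1, "/models/finance-lora"),
)
print(out[0].outputs[0].text)
```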

Recommended server configurations for hosting GLM-4.6V-Flash
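
To interpret the options below, a back-of-envelope estimate of weight memory helps; the bytes-per-parameter figures are generic assumptions (FP16, INT8, and INT4 weight-only quantization), not measured numbers for this deployment.

```python
# Rough VRAM needed for the weights alone, before KV cache.
PARAMS = 11e9  # parameter count from the card above

for name, bytes_per_param in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    gib = PARAMS * bytes_per_param / 2**30
    print(f"{name}: ~{gib:.0f} GiB")
# FP16: ~20 GiB, INT8: ~10 GiB, INT4: ~5 GiB
```

The KV cache for 131,072-token contexts comes on top of this, which is why single 24 GB cards appear alongside multi-GPU flavors that split the model with tensor or pipeline parallelism.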

Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency |
|---|---|---|---|---|---|---|
| teslaa10-1.16.32.160 | 131,072 | - | 1 | $0.53 | 2.04 | 8 |
| teslat4-2.16.32.160 | 131,072 | tensor | 2 | $0.54 | 2.98 | 8 |
| teslaa2-2.16.32.160 | 131,072 | tensor | 2 | $0.57 | 2.98 | 8 |
| rtx2080ti-2.12.64.160 | 131,072 | tensor | 2 | $0.69 | 1.18 | 8 |
| rtx3090-1.16.24.160 | 131,072 | - | 1 | $0.83 | 2.04 | 8 |
| rtx4090-1.16.32.160 | 131,072 | - | 1 | $1.02 | 2.04 | 8 |
| teslav100-1.12.64.160 | 131,072 | - | 1 | $1.20 | 3.48 | 8 |
| rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | 5.86 | 8 |
| rtx3080-3.16.64.160 | 131,072 | pipeline | 3 | $1.43 | 2.12 | 8 |
| rtx5090-1.16.64.160 | 131,072 | - | 1 | $1.59 | 3.48 | 8 |
| rtx3080-4.16.64.160 | 131,072 | tensor | 4 | $1.82 | 3.42 | 8 |
| teslaa100-1.16.64.160 | 131,072 | - | 1 | $2.37 | 12.12 | 8 |
| h100-1.16.64.160 | 131,072 | - | 1 | $3.83 | 12.12 | 8 |
| h100nvl-1.16.96.160 | 131,072 | - | 1 | $4.11 | 14.64 | 8 |
| h200-1.16.128.160 | 131,072 | - | 1 | $4.74 | 23.10 | 8 |
Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency |
|---|---|---|---|---|---|---|
| teslaa10-1.16.32.160 | 131,072 | - | 1 | $0.53 | 1.23 | 2 |
| teslat4-2.16.32.160 | 131,072 | tensor | 2 | $0.54 | 2.17 | 2 |
| teslaa2-2.16.32.160 | 131,072 | tensor | 2 | $0.57 | 2.17 | 2 |
| rtx3090-1.16.24.160 | 131,072 | - | 1 | $0.83 | 1.23 | 2 |
| rtx2080ti-3.12.24.120 | 131,072 | pipeline | 3 | $0.84 | 1.85 | 2 |
| rtx4090-1.16.32.160 | 131,072 | - | 1 | $1.02 | 1.23 | 2 |
| rtx2080ti-4.16.32.160 | 131,072 | tensor | 4 | $1.12 | 3.33 | 2 |
| teslav100-1.12.64.160 | 131,072 | - | 1 | $1.20 | 2.67 | 2 |
| rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | 5.05 | 2 |
| rtx3080-3.16.64.160 | 131,072 | pipeline | 3 | $1.43 | 1.31 | 2 |
| rtx5090-1.16.64.160 | 131,072 | - | 1 | $1.59 | 2.67 | 2 |
| rtx3080-4.16.64.160 | 131,072 | tensor | 4 | $1.82 | 2.61 | 2 |
| teslaa100-1.16.64.160 | 131,072 | - | 1 | $2.37 | 11.31 | 2 |
| h100-1.16.64.160 | 131,072 | - | 1 | $3.83 | 11.31 | 2 |
| h100nvl-1.16.96.160 | 131,072 | - | 1 | $4.11 | 13.83 | 2 |
| h200-1.16.128.160 | 131,072 | - | 1 | $4.74 | 22.29 | 2 |
Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency |
|---|---|---|---|---|---|---|
| teslat4-3.32.64.160 | 131,072 | pipeline | 3 | $0.88 | 3.02 | 2 |
| teslaa10-2.16.64.160 | 131,072 | tensor | 2 | $0.93 | 3.52 | 2 |
| teslat4-4.16.64.160 | 131,072 | tensor | 4 | $0.96 | 5.40 | 2 |
| teslaa2-3.32.128.160 | 131,072 | pipeline | 3 | $1.06 | 3.02 | 2 |
| rtx2080ti-4.16.32.160 | 131,072 | tensor | 4 | $1.12 | 1.80 | 2 |
| teslav100-1.12.64.160 | 131,072 | - | 1 | $1.20 | 1.14 | 2 |
| rtxa5000-2.16.64.160.nvlink | 131,072 | tensor | 2 | $1.23 | 3.52 | 2 |
| teslaa2-4.32.128.160 | 131,072 | tensor | 4 | $1.26 | 5.40 | 2 |
| rtx3090-2.16.64.160 | 131,072 | tensor | 2 | $1.56 | 3.52 | 2 |
| rtx5090-1.16.64.160 | 131,072 | - | 1 | $1.59 | 1.14 | 2 |
| rtx3080-4.16.64.160 | 131,072 | tensor | 4 | $1.82 | 1.08 | 2 |
| rtx4090-2.16.64.160 | 131,072 | tensor | 2 | $1.92 | 3.52 | 2 |
| teslaa100-1.16.64.160 | 131,072 | - | 1 | $2.37 | 9.78 | 2 |
| h100-1.16.64.160 | 131,072 | - | 1 | $3.83 | 9.78 | 2 |
| h100nvl-1.16.96.160 | 131,072 | - | 1 | $4.11 | 12.30 | 2 |
| h200-1.16.128.160 | 131,072 | - | 1 | $4.74 | 20.76 | 2 |

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.