GLM-4.6V-Flash is a lightweight member of the GLM-V family of multimodal language models, with 9 billion parameters, optimized for local deployment and low-latency applications. Despite its compact size, the model retains the key capabilities of the larger 106-billion-parameter version, including a 128,000-token context window and support for Native Multimodal Function Calling, an innovation first introduced in the GLM-V series that allows passing images, screenshots, and documents directly as tool parameters without an intermediate text-conversion step. This configuration lets the model process approximately 150 document pages, 200 slides, or an hour of video in a single pass.
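As a rough sketch of what passing an image alongside a callable tool can look like, assuming an OpenAI-compatible chat-completions schema (the tool name `record_line_items`, the model id `glm-4.6v-flash`, and the invoice scenario are illustrative assumptions, not documented GLM-V API details):

```python
import base64

def build_request(image_bytes: bytes) -> dict:
    """Build a chat request that hands the model an image directly,
    together with a tool it may call with the extracted data.
    Sketch only: field names follow the OpenAI-compatible convention."""
    data_url = "data:image/png;base64," + base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "glm-4.6v-flash",  # assumed served-model identifier
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Extract the line items from this invoice."},
                # The image rides along as structured content, with no
                # OCR-to-text preprocessing step in between.
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
        "tools": [{
            "type": "function",
            "function": {
                "name": "record_line_items",  # hypothetical tool
                "description": "Store line items parsed from an invoice image.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "items": {"type": "array", "items": {"type": "string"}},
                    },
                    "required": ["items"],
                },
            },
        }],
    }

print(build_request(b"\x89PNG-stand-in")["tools"][0]["function"]["name"])
# prints "record_line_items"
```

The same payload shape works whether the endpoint is a hosted API or a local vLLM/SGLang server.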
The model posts state-of-the-art results among open-source models of comparable scale: 86.9 on MMBench V1.1, 82.7 on MathVista (multimodal mathematical reasoning), 84.7 on OCRBench (text recognition in images), and 89.2 on AI2D (scientific diagram understanding). It is particularly strong on agentic tasks, scoring 71.8 on WebVoyager (browser navigation) and 69.8 on Design2Code (reproducing UIs as code), and it outperforms significantly larger models such as Qwen2.5-VL-72B on long-document understanding.
Use cases for the model include: local processing of confidential documents (financial reports, medical records) with table and chart analysis; generating frontend code (precise HTML/CSS) from UI screenshots, with iterative editing via text commands; and building multimodal agents that automate tasks such as visual web search or processing mixed-media content (text plus images) for social media. Thanks to its MIT license and support for inference frameworks such as vLLM and SGLang, the model is ready for production deployment in both cloud and edge scenarios.
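Since both vLLM and SGLang expose an OpenAI-compatible HTTP endpoint, the screenshot-to-code use case can be sketched as a plain JSON request. The URL, port, served model name, and prompt below are assumptions for illustration, not documented values:

```python
import base64
import json
import urllib.request

# Assumed local endpoint: vLLM's default OpenAI-compatible server address.
ENDPOINT = "http://localhost:8000/v1/chat/completions"

def screenshot_to_html_request(png_bytes: bytes) -> dict:
    """Build a chat request asking the model to reproduce a UI
    screenshot as HTML/CSS (the Design2Code-style use case)."""
    data_url = "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")
    return {
        "model": "glm-4.6v-flash",  # assumed served-model name
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Reproduce this UI as a single HTML file with inline CSS."},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
        "max_tokens": 4096,
    }

def send(body: dict) -> dict:
    """POST the request to the local server and decode the JSON reply."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Iterative editing then amounts to appending the model's previous HTML answer and a follow-up instruction ("make the header sticky") to the `messages` list and resending.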
| Model Name | Context | Type | GPU | TPS | Status | Link |
|---|---|---|---|---|---|---|

There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend the following dedicated instance configurations for private deployments:
| vCPU | RAM, MB | Disk, GB | GPUs | Parallelism | Context, tokens | Price/hr |
|---|---|---|---|---|---|---|
| 16 | 32768 | 160 | 1 | | 131,072 | $0.53 |
| 16 | 32768 | 160 | 2 | tensor | 131,072 | $0.54 |
| 16 | 32768 | 160 | 2 | tensor | 131,072 | $0.57 |
| 12 | 65536 | 160 | 2 | tensor | 131,072 | $0.69 |
| 16 | 24576 | 160 | 1 | | 131,072 | $0.83 |
| 16 | 32768 | 160 | 1 | | 131,072 | $1.02 |
| 12 | 65536 | 160 | 1 | | 131,072 | $1.20 |
| 16 | 65536 | 160 | 2 | tensor | 131,072 | $1.23 |
| 16 | 65536 | 160 | 3 | pipeline | 131,072 | $1.43 |
| 16 | 65536 | 160 | 1 | | 131,072 | $1.59 |
| 16 | 65536 | 160 | 4 | tensor | 131,072 | $1.82 |
| 16 | 65536 | 160 | 1 | | 131,072 | $2.37 |
| 16 | 65536 | 160 | 1 | | 131,072 | $3.83 |
| 16 | 98304 | 160 | 1 | | 131,072 | $4.11 |
| 16 | 131072 | 160 | 1 | | 131,072 | $4.74 |
| vCPU | RAM, MB | Disk, GB | GPUs | Parallelism | Context, tokens | Price/hr |
|---|---|---|---|---|---|---|
| 16 | 32768 | 160 | 1 | | 131,072 | $0.53 |
| 16 | 32768 | 160 | 2 | tensor | 131,072 | $0.54 |
| 16 | 32768 | 160 | 2 | tensor | 131,072 | $0.57 |
| 16 | 24576 | 160 | 1 | | 131,072 | $0.83 |
| 12 | 24576 | 120 | 3 | pipeline | 131,072 | $0.84 |
| 16 | 32768 | 160 | 1 | | 131,072 | $1.02 |
| 16 | 32768 | 160 | 4 | tensor | 131,072 | $1.12 |
| 12 | 65536 | 160 | 1 | | 131,072 | $1.20 |
| 16 | 65536 | 160 | 2 | tensor | 131,072 | $1.23 |
| 16 | 65536 | 160 | 3 | pipeline | 131,072 | $1.43 |
| 16 | 65536 | 160 | 1 | | 131,072 | $1.59 |
| 16 | 65536 | 160 | 4 | tensor | 131,072 | $1.82 |
| 16 | 65536 | 160 | 1 | | 131,072 | $2.37 |
| 16 | 65536 | 160 | 1 | | 131,072 | $3.83 |
| 16 | 98304 | 160 | 1 | | 131,072 | $4.11 |
| 16 | 131072 | 160 | 1 | | 131,072 | $4.74 |
| vCPU | RAM, MB | Disk, GB | GPUs | Parallelism | Context, tokens | Price/hr |
|---|---|---|---|---|---|---|
| 32 | 65536 | 160 | 3 | pipeline | 131,072 | $0.88 |
| 16 | 65536 | 160 | 2 | tensor | 131,072 | $0.93 |
| 16 | 65536 | 160 | 4 | tensor | 131,072 | $0.96 |
| 32 | 131072 | 160 | 3 | pipeline | 131,072 | $1.06 |
| 16 | 32768 | 160 | 4 | tensor | 131,072 | $1.12 |
| 12 | 65536 | 160 | 1 | | 131,072 | $1.20 |
| 16 | 65536 | 160 | 2 | tensor | 131,072 | $1.23 |
| 32 | 131072 | 160 | 4 | tensor | 131,072 | $1.26 |
| 16 | 65536 | 160 | 2 | tensor | 131,072 | $1.56 |
| 16 | 65536 | 160 | 1 | | 131,072 | $1.59 |
| 16 | 65536 | 160 | 4 | tensor | 131,072 | $1.82 |
| 16 | 65536 | 160 | 2 | tensor | 131,072 | $1.92 |
| 16 | 65536 | 160 | 1 | | 131,072 | $2.37 |
| 16 | 65536 | 160 | 1 | | 131,072 | $3.83 |
| 16 | 98304 | 160 | 1 | | 131,072 | $4.11 |
| 16 | 131072 | 160 | 1 | | 131,072 | $4.74 |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.