GLM-4.7-Flash is a compact model built on a Mixture of Experts (MoE) architecture, featuring 30 billion total parameters with only 4 of 64 experts activated per token (~3.6 billion active parameters). It offers a strong balance between performance and efficiency: the model delivers results comparable to much larger LLMs while requiring only ~24 GB of VRAM for inference. The model supports a long context window of up to 200,000 input tokens and can generate responses of up to 128,000 tokens.
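The headline numbers above imply a rough split between always-active (shared) weights and the routed expert pool. A back-of-the-envelope sketch, assuming equal-sized experts and lumping everything non-expert into one "shared" bucket (an illustrative simplification, not the published layer breakdown):

```python
# Back-of-the-envelope MoE parameter accounting for GLM-4.7-Flash.
# Only the headline figures (30B total, 4 of 64 experts, ~3.6B active)
# come from the model description; the shared/expert split derived below
# is an illustrative estimate, not the published architecture.
total_params = 30.0e9    # total parameters
num_experts = 64         # experts per MoE layer
topk = 4                 # experts routed per token
active_params = 3.6e9    # parameters active per token

routed_fraction = topk / num_experts  # 4/64 = 0.0625 of the expert pool fires per token

# Solve the system:
#   total  = shared + expert_pool
#   active = shared + routed_fraction * expert_pool
expert_pool = (total_params - active_params) / (1 - routed_fraction)
shared = total_params - expert_pool

print(f"expert pool ~= {expert_pool / 1e9:.2f}B, shared ~= {shared / 1e9:.2f}B")
```

Under these assumptions roughly 28B parameters sit in the sparsely routed expert pool and under 2B are always active, which is why per-token compute stays close to that of a 3–4B dense model despite the 30B footprint.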
Unlike the full-scale GLM-4.7 (which is designed for maximum performance without resource constraints), the Flash version is specifically built for easy deployment in environments with limited computational resources—such as local servers, edge devices, or budget cloud instances. Compared to its predecessor, GLM-4.5 Air, Flash features improved expert routing algorithms and is optimized for multi-step agent-based tasks thanks to its "Preserved Thinking mode," which enables the model to perform complex sequential actions without degradation in quality.
GLM-4.7-Flash confidently outperforms other open models in its class on agentic and programming benchmarks. On τ²-Bench, which assesses a model’s ability to interact with users through multi-step reasoning and autonomous tool usage in realistic domains, it scored 79.5 points, significantly ahead of Qwen3-30B-A3B-Thinking (49.0) and GPT-OSS-20B (47.7). An even more impressive result is its score of 59.2 on SWE-bench Verified, where the model is tested on fixing real bugs in GitHub repositories; here it also surpassed both Qwen3 (22.0) and GPT-OSS-20B (34.0). Additionally, the model shows strong performance in complex reasoning: 75.2 on GPQA (natural sciences) and 91.6 on AIME 25 (olympiad-level mathematics).
Use cases naturally follow from its technical strengths. First and foremost: software development—frontend and backend tasks, code generation and debugging, working with large codebases. Second: agentic systems requiring multi-step planning and tool interaction (browser navigation, API usage, business process automation). Third: long-context document processing—legal texts, technical documentation, and literary works in Chinese and other languages. Finally, the model is well-suited for resource-constrained environments: local deployment in organizations with data privacy requirements, or use by startups with limited inference budgets. It supports popular deployment frameworks such as vLLM, SGLang, and Transformers.
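As a concrete illustration of the vLLM path, the snippet below builds an OpenAI-compatible chat request for a locally served instance. The base URL, served model name, and sampling settings are assumptions for illustration, not values from this page; adjust them to your deployment:

```python
import json

# Sketch of a chat request to a vLLM OpenAI-compatible endpoint.
# BASE_URL assumes vLLM's default port, and the model name is whatever
# was passed to `vllm serve` -- both are assumptions, not fixed values.
BASE_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "GLM-4.7-Flash",
    "messages": [
        {"role": "user", "content": "Refactor this function to be iterative."},
    ],
    "max_tokens": 1024,
    "temperature": 0.6,
}
body = json.dumps(payload)

# Send with any HTTP client, e.g.:
#   requests.post(BASE_URL, json=payload)
print(body)
```

The same request shape works against SGLang's OpenAI-compatible server, so client code stays unchanged if you switch serving frameworks.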
| Model Name | Context | Type | GPU | TPS | Status | Link |
|---|---|---|---|---|---|---|
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying a private instance in the following configurations:
| Context | Parallelism | vCPU | RAM, MB | Disk, GB | GPU | Price | |
|---|---|---|---|---|---|---|---|
| 202,752 | pipeline | 32 | 393216 | 160 | 3 | $7.35 | Launch |
| 202,752 | tensor | 16 | 262144 | 120 | 4 | $9.13 | Launch |
| 202,752 | tensor | 24 | 262144 | 160 | 2 | $9.40 | Launch |
| 202,752 | pipeline | 44 | 262144 | 160 | 8 | $11.54 | Launch |
| 202,752 | pipeline | 32 | 393216 | 160 | 3 | $11.72 | Launch |
| 202,752 | pipeline | 24 | 393216 | 480 | 3 | $12.38 | Launch |
| 202,752 | tensor | 16 | 262144 | 120 | 4 | $14.95 | Launch |
| 202,752 | tensor | 32 | 393216 | 480 | 4 | $16.23 | Launch |
| Context | Parallelism | vCPU | RAM, MB | Disk, GB | GPU | Price | |
|---|---|---|---|---|---|---|---|
| 202,752 | tensor | 16 | 262144 | 120 | 4 | $9.13 | Launch |
| 202,752 | tensor | 24 | 262144 | 160 | 2 | $9.40 | Launch |
| 202,752 | pipeline | 24 | 393216 | 480 | 3 | $12.38 | Launch |
| 202,752 | tensor | 16 | 262144 | 120 | 4 | $14.95 | Launch |
| 202,752 | tensor | 32 | 393216 | 480 | 4 | $16.23 | Launch |
| Context | Parallelism | vCPU | RAM, MB | Disk, GB | GPU | Price | |
|---|---|---|---|---|---|---|---|
| 202,752 | tensor | 16 | 262144 | 240 | 4 | $9.14 | Launch |
| 202,752 | tensor | 24 | 262144 | 160 | 2 | $9.40 | Launch |
| 202,752 | pipeline | 24 | 393216 | 480 | 3 | $12.38 | Launch |
| 202,752 | tensor | 16 | 262144 | 240 | 4 | $14.96 | Launch |
| 202,752 | tensor | 32 | 393216 | 480 | 4 | $16.23 | Launch |
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.