GLM-4.7-Flash

reasoning

GLM-4.7-Flash is a compact model built on a Mixture of Experts (MoE) architecture: 30 billion total parameters, with only 4 of its 64 experts activated per token (~3.6 billion active parameters). This gives it an unusual balance of performance and efficiency: the model delivers results comparable to much larger LLMs while requiring only ~24 GB of VRAM for inference. It supports a context window of up to 200,000 input tokens and can generate responses of up to 128,000 tokens.
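
To make these numbers concrete, here is a back-of-the-envelope sketch; the parameter counts come from this page, while the bytes-per-parameter figures and the reading of the ~24 GB claim are our assumptions:

```python
# Back-of-the-envelope sketch of GLM-4.7-Flash's compute and memory budget.
# Parameter counts are taken from this page; the bytes-per-parameter figures
# are standard, and real VRAM usage adds KV cache, activations, and overhead.

TOTAL_PARAMS = 31.2e9    # total parameters (~31.2B)
ACTIVE_PARAMS = 3.6e9    # parameters activated per token
EXPERTS_TOTAL = 64
EXPERTS_ACTIVE = 4

print(f"Experts per token: {EXPERTS_ACTIVE}/{EXPERTS_TOTAL} "
      f"({EXPERTS_ACTIVE / EXPERTS_TOTAL:.1%})")
print(f"Active parameter share: {ACTIVE_PARAMS / TOTAL_PARAMS:.1%}")

# Weight memory alone at common precisions:
for name, bytes_per_param in [("FP16/BF16", 2), ("FP8/INT8", 1), ("INT4", 0.5)]:
    gib = TOTAL_PARAMS * bytes_per_param / 2**30
    print(f"{name:>9}: ~{gib:.0f} GiB of weights")
```

At FP16 the weights alone come to ~58 GiB, so the quoted ~24 GB figure presumably refers to a quantized (roughly 4-bit) checkpoint plus KV cache and runtime overhead.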

Unlike the full-scale GLM-4.7 (designed for maximum performance without resource constraints), the Flash version is built specifically for easy deployment in environments with limited computational resources, such as local servers, edge devices, or budget cloud instances. Compared to its predecessor, GLM-4.5 Air, Flash features improved expert-routing algorithms and is optimized for multi-step agentic tasks thanks to its Preserved Thinking mode, which lets the model carry out complex sequential actions without degradation in quality.
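
The page does not document the exact API behind Preserved Thinking mode, but the idea can be sketched against an OpenAI-compatible server: keep the model's reasoning from earlier turns in the conversation history instead of discarding it. The endpoint URL, model id, and message handling below are illustrative assumptions, not a documented interface:

```python
# Illustrative sketch of "Preserved Thinking" in an agent loop: the model's
# earlier turns, including any reasoning the server returns, stay in the
# conversation instead of being stripped, so later steps can build on them.
# The endpoint and model id are placeholders, not a documented API.
import requests

BASE_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical server
history = [{"role": "user", "content": "Plan the refactor, then apply step 1."}]

for step in range(3):  # a small multi-step agent loop
    reply = requests.post(BASE_URL, json={
        "model": "GLM-4.7-Flash",
        "messages": history,
    }, timeout=120).json()["choices"][0]["message"]

    # Preserve the assistant turn verbatim (reasoning fields included),
    # rather than keeping only the final answer text.
    history.append(reply)
    history.append({"role": "user", "content": f"Continue with step {step + 2}."})
```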

GLM-4.7-Flash comfortably outperforms other open models of its class on agentic and programming benchmarks. On τ²-Bench, which assesses a model’s ability to interact with users through multi-step reasoning and autonomous tool use in realistic domains, it scored 79.5, well ahead of Qwen3-30B-A3B-Thinking (49.0) and GPT-OSS-20B (47.7). Even more impressive is its 59.2 on SWE-bench Verified, which tests fixing real bugs in GitHub repositories; here too it surpassed Qwen3 (22.0) and GPT-OSS-20B (34.0). The model is also strong at complex reasoning: 75.2 on GPQA (natural sciences) and 91.6 on AIME 25 (olympiad-level mathematics).

The use cases follow naturally from these strengths. First, software development: frontend and backend work, code generation and debugging, and navigating large codebases. Second, agentic systems that require multi-step planning and tool interaction (browser navigation, API usage, business-process automation). Third, long-context document processing: legal texts, technical documentation, and literary works in Chinese and other languages. Finally, the model suits resource-constrained environments: local deployment in organizations with data-privacy requirements, or startups with tight inference budgets. It supports popular deployment frameworks such as vLLM, SGLang, and Transformers; a minimal Transformers sketch follows.
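
A minimal local-inference sketch with Hugging Face Transformers; the repo id below is a guess, so check the actual model card before use:

```python
# Minimal local-inference sketch with Transformers. The repo id is assumed
# for illustration; verify it against the published model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/GLM-4.7-Flash"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", dtype="auto"
)

messages = [{"role": "user",
             "content": "Write a Python function that merges two sorted lists."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```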


Announce Date: 19.01.2026
Parameters: 31,221,488,576 (~31.2B)
Experts: 64
Activated at inference: 3.6B
Context: 203K
Layers: 47
Attention Type: Full Attention
Developer: Z.ai
Transformers Version: 5.0.0rc0
License: MIT

Public endpoint

Use our pre-built public endpoints for free to test inference and explore the capabilities of GLM-4.7-Flash. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
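
Once an endpoint goes live, a request should look roughly like the sketch below; the URL is a placeholder and the OpenAI-compatible request schema is an assumption:

```python
# Hypothetical request to a public endpoint once one is available. The URL
# is a placeholder and the OpenAI-compatible schema is an assumption.
import requests

resp = requests.post(
    "https://<your-endpoint-host>/v1/chat/completions",  # placeholder URL
    headers={"Authorization": "Bearer <YOUR_API_TOKEN>"},
    json={
        "model": "GLM-4.7-Flash",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```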

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable the full context window for long sequences,
  • ensure top-tier security by processing data in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the sketch below).
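
For the custom-weights scenario, here is a sketch of offline inference with vLLM and a LoRA adapter; the repo id and adapter path are placeholders, and adapter compatibility with this architecture should be verified:

```python
# Sketch of serving custom weights on a private instance with vLLM,
# including a LoRA adapter. Model id and adapter path are placeholders.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(model="zai-org/GLM-4.7-Flash",  # assumed repo id
          enable_lora=True,
          max_model_len=32768)            # raise toward 200K if VRAM allows

params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(
    ["Summarize our incident-response runbook."],
    params,
    lora_request=LoRARequest("my-adapter", 1, "/models/my-lora-adapter"),
)
print(outputs[0].outputs[0].text)
```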

Recommended configurations for hosting GLM-4.7-Flash

Prices:

Name                    Context, tokens   Parallelism   vCPU   RAM, MB   Disk, GB   GPUs   Price, hour
teslaa100-3.32.384.160  202,752           pipeline      32     393216    160        3      $7.35
teslaa100-4.16.256.120  202,752           tensor        16     262144    120        4      $9.13
teslaa100-4.16.256.240  202,752           tensor        16     262144    240        4      $9.14
h200-2.24.256.160       202,752           tensor        24     262144    160        2      $9.40
rtx5090-8.44.256.160    202,752           pipeline      44     262144    160        8      $11.54
h100-3.32.384.160       202,752           pipeline      32     393216    160        3      $11.72
h100nvl-3.24.384.480    202,752           pipeline      24     393216    480        3      $12.38
h100-4.16.256.120       202,752           tensor        16     262144    120        4      $14.95
h100-4.16.256.240       202,752           tensor        16     262144    240        4      $14.96
h100nvl-4.32.384.480    202,752           tensor        32     393216    480        4      $16.23

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.