GLM-4.7-Flash

reasoning

GLM-4.7-Flash is a compact model built on a Mixture of Experts (MoE) architecture: 30 billion total parameters, with only 4 of 64 experts activated per token (~3.6 billion active parameters). It strikes a distinctive balance between performance and efficiency, delivering results comparable to much larger LLMs while requiring only ~24 GB of VRAM for inference. The model supports a long context window of up to 200,000 input tokens and can generate responses of up to 128,000 tokens.

Unlike the full-scale GLM-4.7, which targets maximum performance without resource constraints, the Flash version is built specifically for easy deployment in environments with limited computational resources, such as local servers, edge devices, or budget cloud instances. Compared to its predecessor, GLM-4.5-Air, Flash features improved expert-routing algorithms and is optimized for multi-step agentic tasks thanks to its Preserved Thinking mode, which lets the model carry out long chains of sequential actions without quality degradation.
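
In practice, a multi-step agentic task is a loop in which the model plans, calls tools, and folds the results back into its context. Below is a minimal sketch of such a loop against an OpenAI-compatible endpoint (as served by vLLM or SGLang); the base URL, token, model name string, and the get_weather tool are placeholders for illustration, not part of the model's official API.

    import json
    from openai import OpenAI

    # Placeholder endpoint and token; point these at your own deployment.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="YOUR_TOKEN")

    # One toy tool; a real agent would register browsers, APIs, shells, etc.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Return the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    messages = [{"role": "user", "content": "Should I take an umbrella in Berlin today?"}]

    # The agent loop: keep executing tool calls until the model answers in plain text.
    while True:
        msg = client.chat.completions.create(
            model="GLM-4.7-Flash", messages=messages, tools=tools
        ).choices[0].message
        if not msg.tool_calls:
            print(msg.content)
            break
        messages.append(msg)  # keep the assistant turn (and its tool calls) in context
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            # Stubbed tool result; a real implementation would call a weather API.
            result = json.dumps({"city": args["city"], "forecast": "light rain"})
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})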

GLM-4.7-Flash consistently outperforms other open models in its class on agentic and programming benchmarks. On τ²-Bench, which assesses a model's ability to interact with users through multi-step reasoning and autonomous tool use in realistic domains, it scores 79.5, well ahead of Qwen3-30B-A3B-Thinking (49.0) and GPT-OSS-20B (47.7). Even more striking is its 59.2 on SWE-bench Verified, where models are tested on fixing real bugs in GitHub repositories; here, too, it surpasses both Qwen3 (22.0) and GPT-OSS-20B (34.0). The model is also strong on complex reasoning: 75.2 on GPQA (natural sciences) and 91.6 on AIME 25 (olympiad-level mathematics).

Its use cases follow naturally from these technical strengths. First and foremost, software development: frontend and backend tasks, code generation and debugging, and work with large codebases. Second, agentic systems that require multi-step planning and tool interaction (browser navigation, API usage, business-process automation). Third, long-context document processing: legal texts, technical documentation, and literary works in Chinese and other languages. Finally, the model suits resource-constrained environments: local deployment in organizations with data-privacy requirements, or startups with limited inference budgets. It supports popular deployment frameworks such as vLLM, SGLang, and Transformers.
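
As one deployment path, here is a minimal offline-inference sketch using vLLM's Python API. The Hugging Face repo id zai-org/GLM-4.7-Flash is an assumption for this example, and the parallelism and context settings should be matched to your hardware (see the server configurations below).

    from vllm import LLM, SamplingParams

    # Repo id is assumed for this sketch; substitute your actual weights location.
    llm = LLM(
        model="zai-org/GLM-4.7-Flash",
        tensor_parallel_size=4,   # e.g. a 4-GPU node; use 1 for a single large GPU
        max_model_len=32768,      # raise toward the 200K limit only if VRAM allows
    )

    params = SamplingParams(temperature=0.6, max_tokens=512)
    outputs = llm.generate(
        ["Write a Python function that checks whether a number is prime."], params
    )
    print(outputs[0].outputs[0].text)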


Announce Date: 19.01.2026
Parameters: 32B
Experts: 64
Activated at inference: 4B
Context: 203K
Layers: 47
Attention Type: Full Attention
Developer: Z.ai
Transformers Version: 5.0.0rc0
License: MIT

Public endpoint

Use our pre-built public endpoints for free to test inference and explore GLM-4.7-Flash capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
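
Once an endpoint is live (public, or a private server as described below), requests go through an OpenAI-compatible HTTP API, passing the token as a Bearer credential. A minimal sketch; the base URL below is a placeholder, so substitute the real one for your endpoint.

    import requests

    API_URL = "https://example.immers.cloud/v1/chat/completions"  # placeholder URL
    TOKEN = "YOUR_API_TOKEN"  # issued on the token management page

    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "model": "GLM-4.7-Flash",
            "messages": [{"role": "user", "content": "Summarize the MoE idea in two sentences."}],
            "max_tokens": 256,
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])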

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the sketch below).
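
For the custom-weights scenario, vLLM can serve the base model with a LoRA adapter attached at request time. A minimal sketch, assuming a hypothetical adapter for ticket classification; the repo id and adapter path are placeholders.

    from vllm import LLM, SamplingParams
    from vllm.lora.request import LoRARequest

    # Base model with LoRA support; the adapter name and path are hypothetical.
    llm = LLM(model="zai-org/GLM-4.7-Flash", enable_lora=True)

    out = llm.generate(
        ["Classify this support ticket: 'My invoice total is wrong.'"],
        SamplingParams(max_tokens=128),
        lora_request=LoRARequest("ticket-lora", 1, "/models/ticket-lora"),
    )
    print(out[0].outputs[0].text)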

Recommended server configurations for hosting GLM-4.7-Flash

Prices:
Name | Context | Parallelism | GPUs | Price, hour | TPS
teslaa2-6.32.128.160 | 202,752 | pipeline | 6 | $1.65 | 1.136
teslaa10-4.16.128.160 | 202,752 | tensor | 4 | $1.75 | 1.246
rtxa5000-4.16.128.160.nvlink | 202,752 | tensor | 4 | $2.34 | 1.246
teslaa100-1.16.128.160 | 202,752 | — | 1 | $2.50 | 1.095
rtx3090-4.16.96.320 | 202,752 | tensor | 4 | $2.97 | 1.246
rtx4090-4.16.96.320 | 202,752 | tensor | 4 | $3.68 | 1.246
teslav100-3.64.256.320 | 202,752 | pipeline | 3 | $3.89 | 1.302
h100-1.16.128.160 | 202,752 | — | 1 | $3.95 | 1.095
h100nvl-1.16.96.160 | 202,752 | — | 1 | $4.11 | 1.372
rtx5090-3.16.96.160 | 202,752 | pipeline | 3 | $4.34 | 1.302
teslav100-4.32.96.160 | 202,752 | tensor | 4 | $4.35 | 1.880
h200-1.16.128.160 | 202,752 | — | 1 | $4.74 | 2.303
rtx5090-4.16.128.160 | 202,752 | tensor | 4 | $5.74 | 1.880
Prices:
Name | Context | Parallelism | GPUs | Price, hour | TPS
rtxa5000-6.24.192.160.nvlink | 202,752 | pipeline | 6 | $3.50 | 1.806
teslav100-3.64.256.320 | 202,752 | pipeline | 3 | $3.89 | 1.021
h100nvl-1.16.96.160 | 202,752 | — | 1 | $4.11 | 1.091
rtx5090-3.16.96.160 | 202,752 | pipeline | 3 | $4.34 | 1.021
teslav100-4.32.96.160 | 202,752 | tensor | 4 | $4.35 | 1.599
teslaa100-2.24.96.160.nvlink | 202,752 | tensor | 2 | $4.61 | 2.343
h200-1.16.128.160 | 202,752 | — | 1 | $4.74 | 2.022
rtx5090-4.16.128.160 | 202,752 | tensor | 4 | $5.74 | 1.599
rtx4090-6.44.256.160 | 202,752 | pipeline | 6 | $5.83 | 1.806
h100-2.24.256.160 | 202,752 | tensor | 2 | $7.84 | 2.343
Prices:
Name | Context | Parallelism | GPUs | Price, hour | TPS
rtxa5000-6.24.192.160.nvlink | 202,752 | pipeline | 6 | $3.50 | 1.148
teslaa100-2.24.128.160.nvlink | 202,752 | tensor | 2 | $4.67 | 1.685
h200-1.16.128.160 | 202,752 | — | 1 | $4.74 | 1.364
rtx4090-6.44.256.160 | 202,752 | pipeline | 6 | $5.83 | 1.148
h100-2.24.256.160 | 202,752 | tensor | 2 | $7.84 | 1.685
h100nvl-2.24.192.240 | 202,752 | tensor | 2 | $8.17 | 2.239
rtx5090-6.44.256.160 | 202,752 | pipeline | 6 | $8.86 | 2.099


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.