GLM-4.7-Flash

reasoning

GLM-4.7-Flash is a compact model built on the Mixture of Experts (MoE) architecture, with 30 billion total parameters of which only 4 out of 64 experts are activated per token (~3.6 billion active parameters). It strikes a balance between performance and efficiency: the model delivers results comparable to much larger LLMs while requiring only ~24 GB of VRAM for inference. The model supports a long context window of up to 200,000 input tokens and can generate responses of up to 128,000 tokens.
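The per-token savings of MoE routing can be sketched numerically: a router scores all 64 experts and keeps only the top 4, so each forward pass touches roughly 3.6B of the 30B weights. This is a minimal illustrative sketch, not GLM-4.7-Flash's actual router or layer shapes:

```python
import math
import random

NUM_EXPERTS = 64  # total experts per MoE layer
TOP_K = 4         # experts activated per token

random.seed(0)
# One token's router scores (stand-in for a learned router layer).
router_logits = [random.gauss(0.0, 1.0) for _ in range(NUM_EXPERTS)]

# Per-token top-k expert selection.
top_experts = sorted(range(NUM_EXPERTS), key=lambda i: router_logits[i])[-TOP_K:]

# Softmax gate weights over the selected experts only.
exps = [math.exp(router_logits[i]) for i in top_experts]
total = sum(exps)
gates = [e / total for e in exps]

print("active experts:", sorted(top_experts))
print("gate weights:", [round(g, 3) for g in gates])

# With ~30B total parameters but only ~3.6B active per token,
# each forward pass touches only a small fraction of the weights.
active_fraction = 3.6 / 30
print(f"active fraction per token: ~{active_fraction:.0%}")
```

The other 60 experts' weights stay untouched for that token, which is why activation memory and compute scale with the ~3.6B active parameters rather than the full 30B.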

Unlike the full-scale GLM-4.7 (which is designed for maximum performance without resource constraints), the Flash version is specifically built for easy deployment in environments with limited computational resources—such as local servers, edge devices, or budget cloud instances. Compared to its predecessor, GLM-4.5 Air, Flash features improved expert routing algorithms and is optimized for multi-step agent-based tasks thanks to its "Preserved Thinking mode," which enables the model to perform complex sequential actions without degradation in quality.

GLM-4.7-Flash outperforms other open models in its class on agentic and programming benchmarks. On τ²-Bench, which assesses a model’s ability to interact with users through multi-step reasoning and autonomous tool usage in realistic domains, it scored 79.5 points, well ahead of Qwen3-30B-A3B-Thinking (49.0) and GPT-OSS-20B (47.7). Even more striking is its score of 59.2 on SWE-bench Verified, where the model is tested on fixing real bugs in GitHub repositories; here it again surpassed both Qwen3 (22.0) and GPT-OSS-20B (34.0). The model also shows strong performance on complex reasoning: 75.2 on GPQA (natural sciences) and 91.6 on AIME 25 (olympiad-level mathematics).

Use cases naturally follow from its technical strengths. First and foremost: software development—frontend and backend tasks, code generation and debugging, working with large codebases. Second: agentic systems requiring multi-step planning and tool interaction (browser navigation, API usage, business process automation). Third: long-context document processing—legal texts, technical documentation, and literary works in Chinese and other languages. Finally, the model is well-suited for resource-constrained environments: local deployment in organizations with data privacy requirements, or use by startups with limited inference budgets. It supports popular deployment frameworks such as vLLM, SGLang, and Transformers.
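For local deployment with vLLM, a typical invocation looks like the following. The Hugging Face model id and flag values here are assumptions for illustration; check the official model card for the exact id and recommended settings:

```shell
# Serve the model behind an OpenAI-compatible API on port 8000.
# "zai-org/GLM-4.7-Flash" is a placeholder model id.
vllm serve zai-org/GLM-4.7-Flash \
  --tensor-parallel-size 2 \
  --max-model-len 131072

# Query it once the server is up:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "zai-org/GLM-4.7-Flash",
       "messages": [{"role": "user", "content": "Hello"}]}'
```

`--tensor-parallel-size` splits the weights across GPUs (matching the tensor-parallel configurations in the server table below), and `--max-model-len` caps the context; serving the full 200K context requires correspondingly more VRAM.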


Announcement Date: 19.01.2026
Parameters: 32B
Experts: 64
Activated at inference: 4B
Context: 203K
Layers: 47
Attention Type: Multi-head Latent Attention
Developer: Z.ai
Transformers Version: 5.0.0rc0
License: MIT

Public endpoint

Use our pre-built public endpoints for free to test inference and explore GLM-4.7-Flash capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name Context Type GPU Status Link
There are no public endpoints for this model yet.
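Once a public endpoint is available, requests follow the standard OpenAI-compatible chat completions format. Below is a minimal sketch; the endpoint URL and token are placeholders you would replace with values from the token management page:

```python
import json

# Placeholder values: substitute the real endpoint URL and the API
# token obtained from the token management page.
API_URL = "https://example.immers.cloud/v1/chat/completions"
API_TOKEN = "YOUR_TOKEN"

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": "GLM-4.7-Flash",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Write a Python function that reverses a string.")
print(json.dumps(payload, indent=2))

# To send the request:
#   import requests
#   requests.post(API_URL, json=payload,
#                 headers={"Authorization": f"Bearer {API_TOKEN}"})
```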

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting GLM-4.7-Flash

Prices:
Name                           Context    Parallelism   GPUs   Price, hour   TPS
teslat4-3.32.64.160            202,752    pipeline      3      $0.88         1.559
teslaa10-2.16.64.160           202,752    tensor        2      $0.93         1.804
teslat4-4.16.64.160            202,752    tensor        4      $0.96         2.723
teslaa2-3.32.128.160           202,752    pipeline      3      $1.06         1.559
rtx2080ti-4.16.64.160          202,752    tensor        4      $1.18         0.962
rtxa5000-2.16.64.160.nvlink    202,752    tensor        2      $1.23         1.804
teslaa2-4.32.128.160           202,752    tensor        4      $1.26         2.723
rtx3090-2.16.64.160            202,752    tensor        2      $1.56         1.804
rtx4090-2.16.64.160            202,752    tensor        2      $1.92         1.804
teslav100-2.16.64.240          202,752    tensor        2      $2.22         3.212
teslaa100-1.16.64.160          202,752    -             1      $2.37         4.865
rtx5090-2.16.64.160            202,752    tensor        2      $2.93         3.212
h100-1.16.64.160               202,752    -             1      $3.83         4.865
h100nvl-1.16.96.160            202,752    -             1      $4.11         6.097
h200-1.16.128.160              202,752    -             1      $4.74         10.235
Prices:
Name                           Context    Parallelism   GPUs   Price, hour   TPS
teslat4-4.16.64.160            202,752    tensor        4      $0.96         1.475
teslaa2-4.32.128.160           202,752    tensor        4      $1.26         1.475
teslaa10-3.16.96.160           202,752    pipeline      3      $1.34         2.424
teslaa10-4.12.48.160           202,752    tensor        4      $1.57         4.292
teslav100-2.16.64.240          202,752    tensor        2      $2.22         1.964
rtx3090-3.16.96.160            202,752    pipeline      3      $2.29         2.424
rtxa5000-4.16.128.160.nvlink   202,752    tensor        4      $2.34         4.292
teslaa100-1.16.64.160          202,752    -             1      $2.37         3.617
rtx4090-3.16.96.160            202,752    pipeline      3      $2.83         2.424
rtx3090-4.16.64.160            202,752    tensor        4      $2.89         4.292
rtx5090-2.16.64.160            202,752    tensor        2      $2.93         1.964
rtx4090-4.16.64.160            202,752    tensor        4      $3.60         4.292
h100-1.16.64.160               202,752    -             1      $3.83         3.617
h100nvl-1.16.96.160            202,752    -             1      $4.11         4.849
h200-1.16.128.160              202,752    -             1      $4.74         8.987
Prices:
Name                           Context    Parallelism   GPUs   Price, hour   TPS
teslaa10-4.16.128.160          202,752    tensor        4      $1.75         1.365
rtxa5000-4.16.128.160.nvlink   202,752    tensor        4      $2.34         1.365
rtx3090-4.16.96.320            202,752    tensor        4      $2.97         1.365
rtx4090-4.16.96.320            202,752    tensor        4      $3.68         1.365
teslav100-3.64.256.320         202,752    pipeline      3      $3.89         1.610
h100nvl-1.16.96.160            202,752    -             1      $4.11         1.923
rtx5090-3.16.96.160            202,752    pipeline      3      $4.34         1.610
teslav100-4.32.96.160          202,752    tensor        4      $4.35         4.182
teslaa100-2.24.96.160.nvlink   202,752    tensor        2      $4.61         7.488
h200-1.16.128.160              202,752    -             1      $4.74         6.060
rtx5090-4.16.128.160           202,752    tensor        4      $5.74         4.182
h100-2.24.256.160              202,752    tensor        2      $7.84         7.488

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.