GLM-4.7 is a language model from Z.ai that marks a significant step forward for the GLM series, positioned as an intelligent agent for programming and complex task solving. Its architecture, inherited from GLM-4.5, follows the ARC concept (Agentic, Reasoning, and Coding). Technically, it is a Mixture-of-Experts (MoE) model with 358 billion parameters, 92 hidden layers, and support for a context length of up to 202,752 tokens; only 8 of its 160 experts are activated to process each token. This design distributes computational resources efficiently, delivering high performance without excessive resource consumption.
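The sparse-activation idea above (8 of 160 experts per token) can be sketched as a top-k router. This is an illustrative toy, not GLM-4.7's actual implementation; all names and sizes other than the 160/8 split are assumptions.

```python
import numpy as np

# Toy MoE router: pick the TOP_K highest-scoring experts for each token.
# NUM_EXPERTS and TOP_K match the figures quoted for GLM-4.7; the hidden
# size and weights are arbitrary demo values.
NUM_EXPERTS = 160
TOP_K = 8
HIDDEN = 64  # toy hidden size

rng = np.random.default_rng(0)
router_weights = rng.standard_normal((HIDDEN, NUM_EXPERTS))

def route(token_vec: np.ndarray) -> list[int]:
    """Return indices of the TOP_K experts selected for one token."""
    logits = token_vec @ router_weights       # one score per expert
    top = np.argsort(logits)[-TOP_K:]         # highest-scoring experts
    return sorted(top.tolist())

token = rng.standard_normal(HIDDEN)
active = route(token)
print(len(active))  # 8 — only 8 of 160 experts process this token
```

Because only the selected experts run their feed-forward pass, per-token compute stays far below what a dense 358B-parameter model would require.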
The main advantage of GLM-4.7 over previous versions lies in its deeply optimized agentic capabilities, particularly for programming. A key innovation is its multi-level thinking system: Interleaved Thinking (reflection before each action), Preserved Thinking (retaining chains of reasoning across queries in complex tasks), and Turn-level Thinking (controlling the depth of analysis for each individual query). These capabilities make GLM-4.7 especially effective in long-horizon agent scenarios that require consistent sequences of actions, as well as in tasks demanding precise adherence to instructions. The model has also made a leap in "Vibe Coding," learning to generate visually appealing, modern web pages, slides, graphic elements, and more from user descriptions.
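A per-request ("turn-level") thinking toggle might look like the sketch below in an OpenAI-compatible chat request. The `thinking` field and its values are assumptions about a vendor extension, not a documented API; consult the provider's API reference for the actual parameter.

```python
# Sketch of a per-turn thinking toggle in an OpenAI-compatible request
# body. The "thinking" field is a hypothetical vendor extension.
def build_request(prompt: str, deep_thinking: bool) -> dict:
    return {
        "model": "glm-4.7",
        "messages": [{"role": "user", "content": prompt}],
        # Assumed extension: control reasoning depth for this turn only.
        "thinking": {"type": "enabled" if deep_thinking else "disabled"},
    }

req = build_request("Refactor this function for readability", deep_thinking=True)
print(req["thinking"]["type"])  # enabled
```

Cheap, simple turns can send `deep_thinking=False` to cut latency, while hard multi-step tasks keep full reasoning on.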
On key benchmarks, GLM-4.7 demonstrates competitive, and often leading, results. In Reasoning, it scores 97.1% on the challenging mathematical test HMMT Feb. 2025, taking second place just behind Gemini 3.0 Pro. In Tool Using, it scores 87.4% on τ²-Bench, which evaluates the ability to perform multi-step tasks in environments such as an online store or a booking service, outperforming GPT-5-High (82.4%) and Claude Sonnet 4.5 (87.2%). On LiveCodeBench-v6 (84.9%), which assesses solving up-to-date programming problems, it likewise performs at the level of the best models.
The use cases for GLM-4.7 cover a wide range of tasks requiring advanced programming capabilities and agentic interaction. The model is ideally suited for integration into coding agents (Claude Code, Kilo Code, Roo Code, Cline), where its ability to generate high-quality code in various languages, work with the terminal, and maintain context in multi-step tasks provides a significant advantage. GLM-4.7 is also effective in creating user interfaces (Vibe Coding), generating web pages and presentations, as well as in scenarios requiring complex thinking using external tools, such as web browsing, data analysis, and mathematical computations.
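Coding agents like those named above wrap the model in a tool-calling loop: the model emits a tool call, the harness executes it and feeds the result back as context. The sketch below shows that dispatch step with a stubbed model turn; the tool set and message format are illustrative assumptions, not any specific agent's protocol.

```python
import subprocess

# Minimal tool-dispatch step of an agent loop. Real agents validate and
# sandbox commands before executing them; this sketch omits that.
TOOLS = {
    "run_shell": lambda args: subprocess.run(
        args["cmd"], shell=True, capture_output=True, text=True
    ).stdout,
}

def handle_tool_call(call: dict) -> dict:
    """Execute one model-emitted tool call and wrap the result as a message."""
    result = TOOLS[call["name"]](call["arguments"])
    return {"role": "tool", "name": call["name"], "content": result}

# Stubbed model turn: in a real agent this comes from the model's response.
model_turn = {"name": "run_shell", "arguments": {"cmd": "echo hello"}}
print(handle_tool_call(model_turn)["content"].strip())  # hello
```

The returned `tool` message is appended to the conversation and sent back to the model, which is where Preserved Thinking helps: the reasoning chain behind the previous step carries over into the next one.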
| Model Name | Context | Type | GPU | TPS | Status | Link |
|---|---|---|---|---|---|---|
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying a private instance in one of the following configurations:
| Name | vCPU | RAM, MB | Disk, GB | GPU | Price, $/hr | |
|---|---|---|---|---|---|---|
| 202,752.0 tensor | 32 | 393216 | 320 | 4 | $9.50 | Launch |
| 202,752.0 tensor | 32 | 524288 | 480 | 3 | $14.36 | Launch |
| 202,752.0 tensor | 44 | 524288 | 320 | 4 | $15.65 | Launch |
| 202,752.0 tensor | 32 | 393216 | 480 | 4 | $16.23 | Launch |
| 202,752.0 tensor | 44 | 524288 | 480 | 8 | $18.35 | Launch |
| 202,752.0 tensor | 32 | 786432 | 480 | 4 | $19.23 | Launch |
| 202,752.0 tensor | 52 | 1048576 | 960 | 8 | $37.37 | Launch |
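For long-term use, the hourly rates in the table translate to a rough monthly figure. The sketch below assumes a 730-hour month; actual monthly billing terms may differ.

```python
# Rough monthly cost from an hourly rate; 730 h/month is an approximation.
HOURS_PER_MONTH = 730

def monthly_cost(hourly_usd: float) -> float:
    return round(hourly_usd * HOURS_PER_MONTH, 2)

print(monthly_cost(9.50))   # 6935.0  — cheapest configuration above
print(monthly_cost(37.37))  # 27280.1 — largest configuration above
```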
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.