GLM-5 represents a significant leap forward from its predecessor GLM-4.7, released just a few months ago. The model has been scaled from 358B parameters (32B active) to 754B parameters (40B active), while the pre-training data volume has increased from 23T to 28.5T tokens.
The key architectural innovation is the integration of DeepSeek Sparse Attention (DSA) — a sparse attention mechanism that drastically reduces computational complexity while maintaining the ability to handle long contexts. DSA operates on a two-stage principle: first, a "lightning indexer" calculates the relevance of each previous token to the current query, then a top-k selection mechanism picks only the most significant tokens for attention computation. This reduces complexity from quadratic O(n²) to linear O(nk), where k is the number of selected tokens (typically 2048), significantly cutting memory costs when working with long contexts.
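To make the two-stage mechanism concrete, below is a minimal sketch of one decoding step in PyTorch. It is illustrative only, not Zhipu's implementation: the function name, the separate low-dimensional indexer projections `idx_q`/`idx_keys`, and the plain dot-product scoring rule are simplifying assumptions (the real lightning indexer is a small learned module).

```python
import torch
import torch.nn.functional as F

def sparse_attention_step(q, keys, values, idx_q, idx_keys, k=2048):
    """One decoding step of top-k sparse attention (illustrative sketch).

    q:        (d,)    query for the current token
    keys:     (n, d)  cached keys of all previous tokens
    values:   (n, d)  cached values of all previous tokens
    idx_q:    (r,)    cheap low-dimensional indexer query, r << d
    idx_keys: (n, r)  cheap low-dimensional indexer keys
    """
    n, d = keys.shape
    # Stage 1: the "lightning indexer" scores every previous token cheaply.
    relevance = idx_keys @ idx_q                        # (n,)
    # Stage 2: keep only the k most relevant tokens.
    top = torch.topk(relevance, min(k, n)).indices      # (k,)
    # Full attention runs over k tokens instead of n, so the whole
    # sequence costs O(nk) rather than O(n^2).
    attn = F.softmax(keys[top] @ q / d ** 0.5, dim=-1)  # (k,)
    return attn @ values[top]                           # (d,)
```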
The second major improvement in GLM-5 is Slime, a new system for reinforcement learning. In a typical RL training pipeline, rollout generation and weight updates run in lockstep, each stage waiting on the other, which creates bottlenecks that slow training down. Slime decouples these stages so they run asynchronously, making training faster and more efficient.
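The pattern can be illustrated with plain Python threads and a queue. This is a minimal sketch of the asynchronous idea only; all names are placeholders rather than Slime's API, and the real system coordinates distributed inference and training engines, not threads.

```python
import queue
import threading
import time

# Finished rollouts flow through a buffer instead of a synchronous handoff.
rollout_buffer = queue.Queue(maxsize=64)

def rollout_worker(worker_id: int) -> None:
    """Generate trajectories continuously, never waiting for the trainer."""
    step = 0
    while True:
        time.sleep(0.01)  # stand-in for expensive LLM rollout generation
        rollout_buffer.put(f"trajectory-{worker_id}-{step}")
        step += 1

def trainer(batch_size: int = 8, updates: int = 3) -> None:
    """Consume trajectories as they arrive and update the policy."""
    for update in range(updates):
        batch = [rollout_buffer.get() for _ in range(batch_size)]
        # Stand-in for a gradient step; refreshed weights would be pushed
        # back to the workers periodically, not after every update.
        print(f"update {update}: trained on {len(batch)} trajectories")

for i in range(4):
    threading.Thread(target=rollout_worker, args=(i,), daemon=True).start()
trainer()
```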
GLM-5 delivers excellent results on key benchmarks, demonstrating strong capabilities in tasks that require long-horizon planning: it often outperforms top open-weight models such as DeepSeek-V3.2 and Kimi K2.5 and approaches closed flagships such as Claude Opus 4.5 and GPT-5.2. The model can convert text or source materials directly into ready-to-use documents in .docx, .pdf, .xlsx, and other formats. GLM-5 supports a range of coding agents (Claude Code, OpenCode, Kilo Code, Roo Code, Cline, Droid) and integrates with OpenClaw, turning the model into a personal assistant that works through apps and devices rather than through a chat interface alone. Finally, GLM-5 is exceptionally strong at front-end generation.
GLM-5 is optimized for a wide range of professional scenarios that require deep reasoning and autonomous task execution. It is ideally suited to complex real-world tasks such as software development, content creation (documents, spreadsheets, presentations), document analysis, full-scale research, and, of course, agentic workflows.
| Model Name | Context | Type | GPU | Status | Link |
|---|---|---|---|---|---|
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend a private instance in one of the following configurations:
| Context | Type | GPU | Price per Hour | TPS | |
|---|---|---|---|---|---|
| 202,752 | pipeline | 6 | $14.10 | 3.528 | Launch |
| 202,752 | tensor | 8 | $18.35 | 15.817 | Launch |
| 202,752 | tensor | 4 | $19.23 | 10.654 | Launch |
| Context | Type | GPU | Price per Hour | TPS | |
|---|---|---|---|---|---|
| 202,752 | pipeline | 6 | $28.39 | 3.727 | Launch |
| 202,752 | tensor | 8 | $37.37 | 25.722 | Launch |
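For rough budgeting, a minimal sketch that converts the rates above into monthly figures, assuming the listed prices are hourly and a month of about 730 hours at full utilization (long-term monthly billing may be discounted):

```python
HOURS_PER_MONTH = 730  # 24 * 365 / 12, rounded

configs = [("6 GPU, pipeline", 14.10), ("8 GPU, tensor", 37.37)]
for name, hourly_usd in configs:
    print(f"{name}: ~${hourly_usd * HOURS_PER_MONTH:,.0f} per month")
```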
Contact our dedicated neural network support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.