GLM-5

reasoning

GLM-5 represents a significant leap forward from its predecessor GLM-4.7, released just a few months ago. The model has been scaled from 358B parameters (32B active) to 754B parameters (40B active), while the pre-training data volume has increased from 23T to 28.5T tokens.
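
The gap between total and active parameters comes from GLM-5's mixture-of-experts design (the spec sheet below lists 256 experts): a router selects a few experts per token, so only a small fraction of the 754B weights participates in each forward pass. A minimal sketch of top-k expert routing; the dimensions and k below are illustrative, not GLM-5's actual configuration:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    # Minimal top-k mixture-of-experts layer; sizes are illustrative only.
    def __init__(self, d_model=64, n_experts=16, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)       # gating scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                                 # x: [tokens, d_model]
        scores = self.router(x)                           # [tokens, n_experts]
        weights, idx = scores.topk(self.k, dim=-1)        # k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for t in range(x.size(0)):
            for w, e in zip(weights[t], idx[t]):
                out[t] += w * self.experts[int(e)](x[t])  # only k experts run
        return out

moe = TopKMoE()
print(moe(torch.randn(3, 64)).shape)                      # torch.Size([3, 64])

Because only k of the experts fire per token, per-token compute scales with k rather than with the total expert count; that is how a 754B-parameter model can run with roughly 40B active parameters.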

The key architectural innovation is the integration of DeepSeek Sparse Attention (DSA) — a sparse attention mechanism that drastically reduces computational complexity while maintaining the ability to handle long contexts. DSA operates on a two-stage principle: first, a "lightning indexer" calculates the relevance of each previous token to the current query, then a top-k selection mechanism picks only the most significant tokens for attention computation. This reduces complexity from quadratic O(n²) to linear O(nk), where k is the number of selected tokens (typically 2048), significantly cutting memory costs when working with long contexts.
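
As a rough illustration of the two-stage principle (a plain dot-product scorer stands in for the learned lightning indexer, and the shapes are simplified to a single query):

import torch
import torch.nn.functional as F

def sparse_attention(q, K, V, k=2048):
    """Two-stage sparse attention sketch for one query vector.

    q: [d] current query; K, V: [n, d] cached keys and values.
    """
    # Stage 1: cheap relevance score for every previous token.
    # (A plain dot product here; DSA uses a small learned lightning indexer.)
    relevance = K @ q                                  # [n]
    # Stage 2: top-k selection keeps only the most significant tokens.
    k = min(k, K.size(0))
    idx = relevance.topk(k).indices                    # [k]
    K_sel, V_sel = K[idx], V[idx]
    # Full attention runs over k tokens instead of all n, so per-query cost
    # drops from O(n) to O(k), i.e. O(nk) for the whole sequence.
    attn = F.softmax((K_sel @ q) / K.size(1) ** 0.5, dim=0)   # [k]
    return attn @ V_sel                                # [d]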

The second major improvement in GLM-5 is Slime, a new system for reinforcement learning. Typically, when training large language models this way, all processes run synchronously and wait for each other, creating bottlenecks that slow things down. Slime solves this by allowing different parts of the system to act independently — asynchronously — making training faster and more efficient.
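
The sketch below is a generic illustration of that decoupled pattern using asyncio queues; it is not Slime's actual API. Rollout workers keep generating trajectories while the trainer consumes whatever is ready, so neither side blocks on the other:

import asyncio
import random

async def rollout_worker(queue, policy_version):
    # Generates trajectories continuously; never waits for a training step.
    while True:
        await asyncio.sleep(random.uniform(0.1, 0.3))   # stand-in for generation
        await queue.put({"trajectory": "...", "version": policy_version[0]})

async def trainer(queue, policy_version, steps=5):
    # Consumes whichever rollouts are ready and publishes new weights.
    for _ in range(steps):
        batch = [await queue.get() for _ in range(4)]
        policy_version[0] += 1                          # stand-in for an update
        print(f"step {policy_version[0]}: trained on {len(batch)} rollouts")

async def main():
    queue, version = asyncio.Queue(maxsize=64), [0]
    workers = [asyncio.create_task(rollout_worker(queue, version))
               for _ in range(8)]
    await trainer(queue, version)
    for w in workers:
        w.cancel()

asyncio.run(main())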

GLM-5 delivers excellent results on key benchmarks, demonstrating strong long-horizon planning: it often outperforms top models such as DeepSeek-V3.2 and Kimi K2.5 and approaches closed flagships such as Claude Opus 4.5 and GPT-5.2. The model can convert text or source materials directly into ready-to-use documents in .docx, .pdf, .xlsx, and other formats. GLM-5 works with a range of coding agents (Claude Code, OpenCode, Kilo Code, Roo Code, Cline, Droid) and integrates with OpenClaw, turning the model into a personal assistant that operates through apps and devices rather than only a chat interface. It is also particularly strong at front-end code generation.

GLM-5 is optimized for a wide range of professional scenarios that require deep reasoning and autonomous task execution. It is well suited to complex real-world tasks such as software development, content creation (documents, spreadsheets, presentations), document analysis, full-scale research, and, of course, work within agent systems.


Announce Date: 11.02.2026
Parameters: 754B
Experts: 256
Activated at inference: 40B
Context: 203K
Layers: 78
Attention Type: DeepSeek Sparse Attention
Developer: Z.ai
Transformers Version: 5.0.2.dev0
License: MIT

Public endpoint

Use our pre-built public endpoints for free to test inference and explore GLM-5 capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
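
When an endpoint does appear, access usually follows the OpenAI-compatible pattern used for hosted models. The base URL and model name below are placeholders rather than confirmed values; the token comes from the token management page:

from openai import OpenAI

# Placeholder base URL and model name: substitute the values shown on the
# endpoint page once one is live.
client = OpenAI(
    base_url="https://api.example-endpoint.cloud/v1",
    api_key="YOUR_ACCESS_TOKEN",
)

response = client.chat.completions.create(
    model="GLM-5",
    messages=[{"role": "user", "content": "Explain DeepSeek Sparse Attention briefly."}],
    max_tokens=256,
)
print(response.choices[0].message.content)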

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the sketch after this list).
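
As one example of the custom-weights scenario, here is a minimal offline-inference sketch with vLLM and a LoRA adapter; the model ID, adapter path, and GPU count are assumptions for illustration:

from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# The Hub ID, adapter path, and parallelism degree below are placeholders.
llm = LLM(
    model="zai-org/GLM-5",          # hypothetical model ID
    tensor_parallel_size=8,
    enable_lora=True,
)

outputs = llm.generate(
    ["Draft an outline for a quarterly report."],
    SamplingParams(max_tokens=256),
    lora_request=LoRARequest("my_adapter", 1, "/path/to/lora_adapter"),
)
print(outputs[0].outputs[0].text)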

Recommended server configurations for hosting GLM-5

Prices:
Name                             Context   Parallelism   GPUs   Price, hour   TPS
teslaa100-6.44.512.480.nvlink    202,752   pipeline      6      $14.10        3.528
teslaa100-8.44.512.480.nvlink    202,752   tensor        8      $18.35        15.817
h200-4.32.768.480                202,752   tensor        4      $19.23        10.654
h200-6.52.896.960                202,752   pipeline      6      $28.39        3.727
h200-8.52.1024.960               202,752   tensor        8      $37.37        25.722


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.