GLM-5.1

Tags: reasoning, coding

GLM-5.1 is a new-generation flagship model designed for agentic engineering and long-chain reasoning. It builds on a scaled-up version of its predecessor's architecture: a Mixture-of-Experts (MoE) design with 754B total parameters and 40B activated per token (top-8 of 256 experts), which keeps inference efficient. The key addition in the fifth series is DeepSeek Sparse Attention (DSA), a sparse attention mechanism that significantly reduces deployment costs while preserving the ability to handle very long contexts. The pre-training corpus has grown from 23 to 28.5 trillion tokens, and for post-training the authors built an asynchronous RL infrastructure called slime, which dramatically increases throughput and enables more granular training iterations.
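To picture the top-8-of-256 routing idea, here is a toy sketch; the hidden size, the softmax-then-renormalize gating, and all tensor shapes are illustrative assumptions, not the model's actual router.

```python
import torch
import torch.nn.functional as F

def route_top8(hidden, router_weight, k=8):
    """Toy MoE router: each token is sent to its top-8 of 256 experts."""
    logits = hidden @ router_weight                    # [tokens, 256]
    gates = F.softmax(logits, dim=-1)
    weights, expert_ids = torch.topk(gates, k, dim=-1)
    weights = weights / weights.sum(-1, keepdim=True)  # renormalize gates
    return expert_ids, weights

tokens = torch.randn(4, 1024)          # 4 tokens, toy hidden size 1024
router = torch.randn(1024, 256)        # one logit per expert
ids, w = route_top8(tokens, router)
print(ids.shape, w.shape)              # both torch.Size([4, 8])
```

In the real model, only the selected experts' FFNs run for each token, which is why 40B of 754B parameters are active per step.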

What sets GLM-5.1 apart from most large language models (including GLM-5) is its ability to stay effective across hundreds or even thousands of iterations. Where previous models quickly exhaust their repertoire of techniques and plateau, GLM-5.1 keeps improving the longer it works. The model does not just produce an initial solution; it systematically breaks a complex problem into stages, runs experiments, analyzes the results, identifies bottlenecks, and eliminates them one by one. In one experiment on a vector database optimization task, GLM-5.1 kept finding improvements for over 600 iterations and 6,000+ tool calls, ultimately pushing performance to 21.5k QPS, roughly 6 times the best result achieved in single-pass mode. This endurance makes GLM-5.1 well suited to tasks where success is determined not by the first answer but by long-term autonomous work.
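The measure-analyze-patch loop described above can be pictured as a hill-climbing skeleton. Everything in this sketch (the benchmark, the tuning knobs, the model call) is a stand-in, not the actual harness used in the experiment.

```python
import random

def run_benchmark(state):
    """Stand-in for a real benchmark run; returns a QPS-like score."""
    return sum(state) + random.random()

def model_call(prompt):
    """Stand-in for a GLM-5.1 tool-calling turn proposing one change."""
    return random.randrange(10)

def optimize(max_iters=600):
    state = [0.0] * 10                  # ten imaginary tuning knobs
    best = run_benchmark(state)
    for _ in range(max_iters):
        knob = model_call(f"current score {best:.2f}; pick a bottleneck")
        state[knob] += 0.1              # apply the proposed change
        score = run_benchmark(state)
        if score > best:
            best = score                # keep the improvement
        else:
            state[knob] -= 0.1          # revert and look elsewhere
    return best

print(f"final score: {optimize():.2f}")
```

The point the card makes is that GLM-5.1 can sustain this kind of loop for hundreds of iterations without running out of ideas.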

GLM-5.1 posts leading results on several benchmarks that validate its engineering and agentic capabilities; the developers compare it not only with open-source models but also with the best proprietary systems. On SWE-Bench Pro, a benchmark for complex software engineering problems, it achieves 58.4%, setting a new quality bar. On NL2Repo (generating a repository from a description) it scores 42.7%, ahead of GLM-5 (35.9%) and many competing systems. On Terminal Bench 2.0, which measures the ability to perform real-world tasks in a terminal environment, it reaches 63.5%, outperforming all open models and well above GLM-5's 56.2%. On the CyberGym benchmark (cybersecurity skills) it scores 68.7%, the best result at the time of release.

The model is intended for a wide range of tasks that require long-running autonomous operation. It excels at writing and refactoring code, optimizing system performance, building full-fledged web applications, and automating complex engineering workflows. Thanks to built-in long-context support and efficient tool use, GLM-5.1 also suits research projects that require repeated calls to external APIs, databases, or file systems. Developers can use GLM-5.1 as the intelligent core of autonomous agents capable of independently solving complex tasks; it integrates well into agentic tools such as Claude Code and performs impressively across dozens of tool calls in a single session. The model is released under the MIT license, provided by the authors in BF16 and FP8 formats, and supported by popular inference frameworks (vLLM, SGLang, xLLM, KTransformers).
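A minimal offline-inference sketch with vLLM, one of the listed frameworks; the Hugging Face repo id "zai-org/GLM-5.1" and the parallelism and context settings are assumptions, not confirmed values.

```python
from vllm import LLM, SamplingParams

# Assumed Hugging Face repo id; adjust to the actual published weights.
llm = LLM(
    model="zai-org/GLM-5.1",
    tensor_parallel_size=8,      # shard the MoE across 8 GPUs
    max_model_len=131072,        # trim the 203K context to fit memory
)
params = SamplingParams(temperature=0.6, max_tokens=512)
out = llm.generate(["Refactor this function for readability: ..."], params)
print(out[0].outputs[0].text)
```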


Announce Date: 03.04.2026
Parameters: 754B
Experts: 256
Activated at inference: 40B
Context: 203K
Layers: 78
Attention Type: DeepSeek Sparse Attention
Developer: Z.ai
Transformers Version: 5.4.0
vLLM Version: glm51
License: MIT

Public endpoint

Use our pre-built public endpoints for free to test inference and explore GLM-5.1 capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
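Once an endpoint is listed, and assuming it is OpenAI-compatible like typical hosted inference APIs, a request might look like the sketch below; the base URL is a placeholder, and the token comes from the token management page mentioned above.

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://example-endpoint.invalid/v1",  # placeholder URL
    api_key="YOUR_API_TOKEN",                        # from the token page
)
resp = client.chat.completions.create(
    model="GLM-5.1",
    messages=[{"role": "user", "content": "Explain what DSA changes."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)
```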

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable the full context window for long sequences,
  • ensure top-tier security by processing data in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the sketch after this list).
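For the custom-weights scenario, here is a minimal sketch of serving a LoRA adapter on top of the base weights with vLLM. The repo id and adapter path are placeholders, and whether vLLM's LoRA support covers this specific MoE architecture is an assumption.

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(
    model="zai-org/GLM-5.1",    # assumed Hugging Face repo id
    enable_lora=True,           # allow per-request adapters
    tensor_parallel_size=8,
)
outputs = llm.generate(
    ["Review this pull request: ..."],
    SamplingParams(max_tokens=256),
    lora_request=LoRARequest("my-finetune", 1, "/weights/my-adapter"),
)
print(outputs[0].outputs[0].text)
```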

Recommended server configurations for hosting GLM-5.1

Prices:
Name | Context | Parallelism | GPUs | Price, hour | TPS
teslaa100-6.44.512.960.nvlink | 131,072 | pipeline | 6 | $14.15 | 0.226
teslaa100-8.44.704.960.nvlink | 202,752 | tensor | 8 | $18.78 | 1.028
h200-4.32.768.640 | 202,752 | tensor | 4 | $19.25 | 1.285
h200-4.32.768.640.nvlink | 202,752 | tensor | 4 | $19.25 | 1.285
h200-6.52.896.960 | 202,752 | pipeline | 6 | $28.39 | 0.372
h200-8.52.1024.960 | 202,752 | tensor | 8 | $37.37 | 1.923
h200-8.52.1024.960.nvlink | 202,752 | tensor | 8 | $37.37 | 1.923
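The "tensor" and "pipeline" labels in the table correspond to the two standard ways of splitting a model across GPUs. A minimal sketch of how they map onto vLLM engine arguments follows; the repo id is an assumed placeholder, the GPU counts mirror the table rows rather than a tested setup, and in practice you would create only one engine.

```python
from vllm import LLM

# Tensor parallelism ("tensor" rows): every layer is sharded across all
# GPUs; highest throughput, but wants NVLink-class interconnect.
llm_tensor = LLM(
    model="zai-org/GLM-5.1",       # assumed repo id
    tensor_parallel_size=8,
    max_model_len=202752,
)

# Pipeline parallelism ("pipeline" rows): the layer stack is cut into
# sequential stages, one per GPU group; tolerates slower links at the
# cost of throughput, as the lower TPS values in the table reflect.
llm_pipeline = LLM(
    model="zai-org/GLM-5.1",
    pipeline_parallel_size=6,
    max_model_len=202752,
)
```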

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.