GLM-4.6

reasoning

GLM-4.6 is built on a Mixture-of-Experts (MoE) architecture with 355 billion total parameters, of which 32 billion are active per forward pass. Like GLM-4.5, it follows a "deeper but narrower" strategy: more layers, but fewer experts and a smaller hidden dimension than DeepSeek-V3 and Kimi K2, a trade-off the developers credit for its strong reasoning performance. The model uses Grouped-Query Attention with partial RoPE (96 attention heads over a hidden dimension of 5120, across 92 layers), QK normalization to stabilize attention logits, and the Muon optimizer for accelerated convergence.
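The attention-side choices are straightforward to sketch. The toy module below is a minimal illustration with made-up dimensions, not Z.ai's implementation; it only shows how the three ingredients named above fit together: grouped-query attention (several query heads sharing each KV head), RMSNorm applied to queries and keys before the logits are formed, and rotary embeddings applied to only part of each head's channels.

```python
# Illustrative sketch of GQA + QK-norm + partial RoPE (toy sizes, not GLM-4.6's).
import torch
import torch.nn.functional as F
from torch import nn

class GQAWithQKNormPartialRoPE(nn.Module):
    def __init__(self, hidden=512, n_heads=8, n_kv_heads=2, rope_frac=0.5):
        super().__init__()
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.head_dim = hidden // n_heads
        self.rope_dim = int(self.head_dim * rope_frac)  # only this slice gets RoPE
        self.q_proj = nn.Linear(hidden, n_heads * self.head_dim, bias=False)
        self.k_proj = nn.Linear(hidden, n_kv_heads * self.head_dim, bias=False)
        self.v_proj = nn.Linear(hidden, n_kv_heads * self.head_dim, bias=False)
        self.o_proj = nn.Linear(n_heads * self.head_dim, hidden, bias=False)
        # QK normalization: RMSNorm on queries and keys keeps attention logits bounded
        self.q_norm = nn.RMSNorm(self.head_dim)
        self.k_norm = nn.RMSNorm(self.head_dim)

    def _partial_rope(self, x, pos):
        # Rotate only the first rope_dim channels of each head ("partial RoPE")
        d = self.rope_dim
        freqs = 1.0 / (10000 ** (torch.arange(0, d, 2, dtype=torch.float32) / d))
        angles = pos[:, None].float() * freqs[None, :]
        cos = angles.cos()[None, :, None, :]
        sin = angles.sin()[None, :, None, :]
        x_rot, x_pass = x[..., :d], x[..., d:]
        x1, x2 = x_rot[..., 0::2], x_rot[..., 1::2]
        rotated = torch.stack((x1 * cos - x2 * sin,
                               x1 * sin + x2 * cos), dim=-1).flatten(-2)
        return torch.cat((rotated, x_pass), dim=-1)

    def forward(self, x):
        b, s, _ = x.shape
        pos = torch.arange(s)
        q = self.q_norm(self.q_proj(x).view(b, s, self.n_heads, self.head_dim))
        k = self.k_norm(self.k_proj(x).view(b, s, self.n_kv_heads, self.head_dim))
        v = self.v_proj(x).view(b, s, self.n_kv_heads, self.head_dim)
        q, k = self._partial_rope(q, pos), self._partial_rope(k, pos)
        q, k, v = (t.transpose(1, 2) for t in (q, k, v))  # (batch, heads, seq, dim)
        # GQA: replicate each KV head across its group of query heads
        rep = self.n_heads // self.n_kv_heads
        k, v = k.repeat_interleave(rep, dim=1), v.repeat_interleave(rep, dim=1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(out.transpose(1, 2).reshape(b, s, -1))

x = torch.randn(1, 16, 512)
print(GQAWithQKNormPartialRoPE()(x).shape)  # torch.Size([1, 16, 512])
```

The practical payoff of GQA is that the KV cache scales with the number of KV heads rather than query heads, which matters at a 200K-token context.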

GLM-4.6 offers several significant improvements over its predecessor: a context window expanded from 128K to 200K tokens, stronger programming capabilities, more advanced reasoning, and better efficiency: the model completes tasks using roughly 15% fewer tokens than GLM-4.5.

According to the official release, GLM-4.6 was evaluated on eight public benchmarks covering agentic tasks, reasoning, and programming. The results show the model competing confidently with leading models such as DeepSeek-V3.2-Exp and Claude Sonnet 4. For example:

  • AIME 25 (mathematical reasoning): 98.6%, significantly outperforming Claude Sonnet 4 (74.3%) and DeepSeek-V3.2-Exp (89.3%);
  • LiveCodeBench v6 (real-world programming): 84.5%, well ahead of GLM-4.5 (63.3%) and DeepSeek-V3.2-Exp (70.1%);
  • BrowseComp (agentic tasks with web search): 45.1%, clearly surpassing GLM-4.5 (26.4%) and DeepSeek-V3.2-Exp (40.1%).

In practical programming tasks, according to an extended CC-Bench evaluation conducted by the developers, GLM-4.6 reaches near-parity with Claude Sonnet 4, posting a 48.6% win rate in head-to-head comparisons on real-world tasks in frontend development, tool creation, data analysis, testing, and algorithms.

These characteristics make GLM-4.6 well suited to building autonomous AI agents, professional software development (from frontend work to refactoring legacy code), analyzing large volumes of documents, creating educational content, and scientific research.


Announce Date: 30.09.2025
Parameters: 357B
Experts: 160
Activated at inference: 32B
Context: 203K
Layers: 92
Attention Type: Full Attention
VRAM requirements: 270.0 GB using 4 bits quantization
Developer: Z.ai
Transformers Version: 4.54.0
License: MIT
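As a rough sanity check of the VRAM figure above (a back-of-the-envelope estimate under simple assumptions, not the provider's exact sizing method): 4-bit weights alone for 357B parameters come to about 166 GB, and the rest of the 270 GB budget absorbs KV cache, activations, and runtime overhead, which grow with context length.

```python
# Back-of-the-envelope VRAM estimate; assumptions, not an exact sizing formula.
total_params = 357e9            # total parameter count from the spec list above
bytes_per_param = 0.5           # 4-bit quantization -> 0.5 bytes per weight
weights_gb = total_params * bytes_per_param / 1024**3
print(f"weights alone: ~{weights_gb:.0f} GB")   # ~166 GB
# Quantization scales, KV cache for long contexts, activations, and framework
# overhead account for the remainder of the quoted 270 GB requirement.
```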

Public endpoint

Use our pre-built public endpoints for free to test inference and explore GLM-4.6 capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
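Once an endpoint is listed, access typically follows an OpenAI-compatible token flow. The sketch below is purely illustrative: the base URL and model name are placeholders, not a real endpoint.

```python
# Hypothetical request sketch; URL and model name are placeholders.
from openai import OpenAI  # assumes an OpenAI-compatible API

client = OpenAI(
    base_url="https://example.immers.cloud/v1",  # placeholder endpoint URL
    api_key="YOUR_API_TOKEN",                    # token from the management page
)
response = client.chat.completions.create(
    model="GLM-4.6",
    messages=[{"role": "user", "content": "Summarize GLM-4.6 in one sentence."}],
)
print(response.choices[0].message.content)
```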

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the sketch below).
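For the custom-weights scenario, a minimal load might look like the following. This is a sketch, not a deployment recipe: the Hugging Face repo id and the adapter path are assumptions, and a production instance would usually sit behind a dedicated inference server.

```python
# Minimal sketch: 4-bit load of GLM-4.6 with a LoRA adapter attached.
# Repo id and adapter path are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel  # for LoRA adapters

model_id = "zai-org/GLM-4.6"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",  # shard across all available GPUs
)
# Attach a fine-tuned LoRA adapter (placeholder path)
model = PeftModel.from_pretrained(model, "your-org/your-lora-adapter")
```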

Recommended configurations for hosting GLM-4.6

Prices:
Name                            Context, tokens   vCPU   RAM, MB   Disk, GB   GPU   Price, hour
teslaa100-4.32.384.320.nvlink   202,752           32     393216    320        4     $10.35
teslah100-4.44.512.320          202,752           44     524288    320        4     $20.77
h200-3.32.512.480               202,752           32     524288    480        3     $21.08

Prices:
Name                            Context, tokens   vCPU   RAM, MB   Disk, GB   GPU   Price, hour
teslaa100-8.44.512.480.nvlink   202,752           44     524288    480        8     $20.05
h200-4.32.768.480               202,752           32     786432    480        4     $28.19

Prices:
Name                            Context, tokens   vCPU   RAM, MB   Disk, GB   GPU   Price, hour
h200-8.52.1024.960              202,752           52     1048576   960        8     $55.29

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.