GLM-4.5-Air embodies the principle of "efficiency and speed": it is designed for agent applications under limited computational resources and, its developers claim, proves that a reasoning-capable model can be both fast and accurate. This compact model, with 106 billion total parameters and 12 billion active parameters, shows how careful architectural optimization can preserve the core capabilities of larger models while drastically reducing resource demands. Built on the same MoE architecture as its larger sibling, it is tuned for rapid inference and high computational efficiency without sacrificing essential functionality. Its agent-oriented training includes extensive optimization for tool usage, web browsing, software development, and frontend engineering, which lets GLM-4.5-Air outperform general-purpose models of similar size on practical development tasks.
The hybrid reasoning system in GLM-4.5-Air is adapted for high-speed, interactive applications. The model inherits the two-mode architecture of the larger version but is optimized to minimize latency in "non-thinking mode," achieving response times under one second for most queries. This makes it ideal for real-time applications such as code autocompletion, interactive debugging, and real-time documentation generation. In "thinking mode," the model remains capable of complex, multi-step reasoning, but with an optimized balance between analytical depth and execution speed.
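As a sketch of how the two modes might be selected through an OpenAI-compatible chat endpoint (the model id and the `thinking` request field are assumptions based on common serving conventions, not confirmed API details; check your provider's documentation):

```python
# Sketch: building chat requests that toggle between GLM-4.5-Air's
# reasoning modes. The model id and the "thinking" field are assumptions
# modeled on common OpenAI-compatible serving conventions.

def build_request(prompt: str, thinking: bool) -> dict:
    """Assemble a chat-completion payload selecting thinking or non-thinking mode."""
    return {
        "model": "glm-4.5-air",  # hypothetical model id for this sketch
        "messages": [{"role": "user", "content": prompt}],
        # Hypothetical switch: "enabled" for multi-step reasoning,
        # "disabled" for low-latency interactive responses.
        "thinking": {"type": "enabled" if thinking else "disabled"},
    }

# Low-latency request, e.g. for code autocompletion.
fast = build_request("Complete this function signature.", thinking=False)
# Deep-reasoning request, e.g. for planning a multi-step refactor.
deep = build_request("Plan a multi-step refactor of this module.", thinking=True)

print(fast["thinking"]["type"])  # disabled
print(deep["thinking"]["type"])  # enabled
```

In practice the payload would be sent to the endpoint with any HTTP client; only the mode switch matters for this illustration.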
GLM-4.5-Air’s benchmark performance is impressive for its class. Ranking 6th on the overall leaderboard spanning 12 key benchmarks with a score of 59.8, it outperforms many larger competitors. Particularly notable is its tool-calling accuracy of 90.6%, surpassing numerous larger proprietary models.
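To illustrate what tool calling involves on the application side, here is a minimal round trip in the OpenAI-style function-calling format that agent frameworks commonly use with GLM-class models. The schema layout follows that convention; the model's exact wire format may differ, and the tool name and stubbed result are illustrative assumptions:

```python
# Sketch: a minimal tool-calling round trip in OpenAI-style function-calling
# format. The tool name and stub result are hypothetical examples.
import json

# Tool schema advertised to the model alongside the chat request.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_file_size",
        "description": "Return the size of a file in bytes.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to a local implementation."""
    args = json.loads(tool_call["arguments"])
    if tool_call["name"] == "get_file_size":
        # Stubbed implementation for the sketch; a real handler would
        # stat the file and return its actual size.
        return json.dumps({"path": args["path"], "bytes": 1024})
    raise ValueError(f"unknown tool: {tool_call['name']}")

# Simulated tool call as the model might emit it in its response.
call = {"name": "get_file_size", "arguments": json.dumps({"path": "main.py"})}
print(dispatch(call))
```

The tool-calling accuracy figure above measures how reliably the model emits calls matching such schemas; the dispatch side shown here is always the application's responsibility.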
Model Name | Context | Type | GPU | TPS | Status | Link
---|---|---|---|---|---|---

There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following scenarios:
Name | vCPU | RAM, MB | Disk, GB | GPU | Price per hour
---|---|---|---|---|---
 | 16 | 131072 | 160 | 4 | $1.75
 | 16 | 131072 | 160 | 4 | $3.23
 | 16 | 131072 | 160 | 4 | $4.26
 | 16 | 98304 | 160 | 3 | $4.34
 | 24 | 262144 | 160 | 2 | $5.35
 | 24 | 262144 | 160 | 2 | $10.40
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.