GLM-4.5-Air

reasoning

GLM-4.5-Air embodies the principle of "efficiency and speed": it is designed specifically for agent applications under limited computational resources and, as its developers claim, proves that a reasoning-capable model can be both fast and accurate. This compact model, with 106 billion total parameters and 12 billion active parameters, demonstrates how thoughtful architectural optimization can preserve the core capabilities of larger models while drastically reducing resource demands. Built on the same MoE architecture as its larger sibling, it is tuned for rapid inference and high computational efficiency without sacrificing essential functionality. Specialized training for agent-oriented tasks includes extensive optimization for tool usage, web browsing, software development, and frontend engineering, enabling GLM-4.5-Air to outperform general-purpose models of similar size on practical development tasks.

The hybrid reasoning system in GLM-4.5-Air is adapted for high-speed, interactive applications. The model inherits the two-mode architecture of the larger version but is optimized to minimize latency in "non-thinking mode," achieving response times under one second for most queries. This makes it ideal for real-time applications such as code autocompletion, interactive debugging, and real-time documentation generation. In "thinking mode," the model remains capable of complex, multi-step reasoning, but with an optimized balance between analytical depth and execution speed.
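The two modes are typically selected per request. A minimal sketch of a chat-completions payload, assuming an OpenAI-compatible server (e.g. vLLM) whose GLM-4.5 chat template accepts an `enable_thinking` flag; verify the exact flag name against your serving stack's documentation:

```python
# Sketch: toggling GLM-4.5-Air's thinking / non-thinking modes per request.
# The `chat_template_kwargs` field and `enable_thinking` flag are assumptions
# based on common OpenAI-compatible servers; check your server's docs.

def build_payload(prompt: str, thinking: bool) -> dict:
    """Build a /v1/chat/completions request body."""
    return {
        "model": "zai-org/GLM-4.5-Air",
        "messages": [{"role": "user", "content": prompt}],
        # Non-thinking mode trades analytical depth for sub-second latency.
        "chat_template_kwargs": {"enable_thinking": thinking},
    }

fast = build_payload("Complete this function signature", thinking=False)
deep = build_payload("Plan a multi-step refactor of this module", thinking=True)
```

Latency-sensitive flows such as autocompletion would use the non-thinking payload; agentic planning steps would enable thinking.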

GLM-4.5-Air’s benchmark performance is impressive for its class: it ranks 6th on the overall leaderboard across 12 key benchmarks with a score of 59.8, outperforming many larger competitors. Particularly notable is its 90.6% tool-calling accuracy, which surpasses numerous larger proprietary models.
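Tool calling is exercised through the standard OpenAI-style function-calling schema. A minimal sketch of such a request; the `get_weather` tool is a hypothetical example, not part of the model or benchmark:

```python
# Sketch: an OpenAI-style tool-calling request of the kind GLM-4.5-Air's
# function-calling accuracy is measured on. The weather tool is hypothetical.

def make_tool_request(user_msg: str) -> dict:
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name
            "description": "Return current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]
    return {
        "model": "zai-org/GLM-4.5-Air",
        "messages": [{"role": "user", "content": user_msg}],
        "tools": tools,
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }

req = make_tool_request("What's the weather in Prague?")
```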


Announce Date: 28.07.2025
Parameters: 106B
Experts: 128
Activated: 12B
Context: 131K
Attention Type: Full Attention
VRAM requirements: 74.2 GB with 4-bit quantization
Developer: Z.ai
Transformers Version: 4.54.0
License: Apache 2.0
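The VRAM figure above can be sanity-checked with a back-of-the-envelope estimate: weights at 4 bits per parameter, plus runtime overhead for KV cache, activations, and buffers. The 1.4x overhead factor below is a rough assumption, not a published number:

```python
# Rough VRAM estimate for 4-bit quantized GLM-4.5-Air.
# The 1.4x overhead factor (KV cache, activations, buffers) is an assumption.
total_params = 106e9        # total parameters
bytes_per_param = 0.5       # 4-bit quantization = half a byte per weight
weights_gb = total_params * bytes_per_param / 1e9   # weights alone, ~53 GB
est_vram_gb = weights_gb * 1.4                      # with overhead, ~74 GB
print(round(weights_gb), round(est_vram_gb, 1))
```

This lands close to the quoted 74.2 GB, which is why 4x 24 GB consumer GPUs or 2x 80 GB datacenter GPUs appear in the configurations below.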

Public endpoint

Use our pre-built public endpoints to test inference and explore GLM-4.5-Air capabilities.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
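On a dedicated instance, the model is typically served with an OpenAI-compatible engine. A minimal launch sketch using vLLM; the flag values (tensor parallelism degree, context length) are assumptions to adapt to the configuration you rent:

```shell
# Sketch: serving GLM-4.5-Air on a 4-GPU instance with vLLM.
# Flag values are assumptions; adjust to your hardware and vLLM version.
MODEL="zai-org/GLM-4.5-Air"
CMD="vllm serve $MODEL --tensor-parallel-size 4 --max-model-len 131072"
echo "$CMD"   # print rather than execute, since GPUs are required to run it
```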

Recommended configurations for hosting GLM-4.5-Air

Prices:
Name                        vCPU   RAM, GB   Disk, GB   GPUs   Price, hour
teslaa10-4.16.128.160        16      128       160        4      $1.75
rtx3090-4.16.128.160         16      128       160        4      $3.23
rtx4090-4.16.128.160         16      128       160        4      $4.26
rtx5090-3.16.96.160          16       96       160        3      $4.34
teslaa100-2.24.256.160       24      256       160        2      $5.35
teslah100-2.24.256.160       24      256       160        2     $10.40
teslaa100-2.24.256.240       24      256       240        2      $5.36
teslah100-2.24.256.240       24      256       240        2     $10.41
teslaa100-4.44.512.320       44      512       320        4     $10.68
teslah100-4.44.512.320       44      512       320        4     $20.77

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.