GLM-4-32B-0414

GLM-4-32B-0414 is a 32-billion-parameter model from the new GLM-4-0414 series developed by Team GLM. Its base model, GLM-4-32B-Base-0414, was pre-trained on 15T tokens of high-quality data, including a significant share of synthetic material focused on logical reasoning. This pre-training provides a solid foundation for the subsequent reinforcement learning and alignment with user preferences.

The model was developed under the "all tools" concept, which lets it interact efficiently with external resources such as Python, web search, user APIs, and other services. Thanks to this capability, it performs strongly on complex agentic tasks, including code generation, function calling, information retrieval, and report writing.
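For example, here is a minimal sketch of a tool call against the model served behind an OpenAI-compatible API (for instance, via vLLM). The endpoint URL, API key, and the get_weather tool are illustrative assumptions, not part of the official documentation.

```python
# Hedged sketch: function calling through an OpenAI-compatible endpoint.
# The base_url, api_key, and get_weather schema below are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for the example
        "description": "Return the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="THUDM/GLM-4-32B-0414",
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)

# If the model chooses to call the tool, the arguments arrive as JSON here.
print(response.choices[0].message.tool_calls)
```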

Its performance is comparable to industry leaders such as GPT-4o and DeepSeek-V3-0324 (671B), especially on programming tasks. The model can generate over 500 lines of working code in various programming languages without follow-up prompts. It supports a context length of up to 128K tokens (native 32K, extended with YaRN) and is convenient to deploy locally, making it a versatile choice for enterprise applications where predictable, stable results are critical.
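As a rough sketch of the long-context setup: Transformers exposes a generic rope_scaling override that can request YaRN scaling at load time. The field names and the scaling factor below follow the generic Transformers convention and are assumptions to verify against the model's config.json.

```python
# Hedged sketch: extending the native 32K window toward 128K with YaRN.
# The rope_scaling fields follow the generic Transformers convention;
# verify them against the model's config.json before relying on them.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/GLM-4-32B-0414"

config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,  # 32K tokens * 4 = 128K tokens
    "original_max_position_embeddings": 32768,
}

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, config=config, device_map="auto"
)
```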


Announce Date: 14.04.2025
Parameters: 32B
Context: 32K (up to 128K with YaRN)
Attention Type: Full Attention
VRAM requirements: 16.8 GB with 4-bit quantization (see the loading sketch below)
Developer: Z.ai
Transformers Version: 4.52.0.dev0
License: Apache 2.0
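
The VRAM figure above corresponds to a 4-bit load of the weights. A minimal sketch with bitsandbytes NF4 quantization is shown below; the actual footprint also grows with context length and batch size.

```python
# Hedged sketch: loading the model in 4-bit NF4 via bitsandbytes,
# matching the ~16.8 GB weight footprint quoted above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "THUDM/GLM-4-32B-0414"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers across available GPUs automatically
)
```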

Public endpoint

Use our pre-built public endpoints to test inference and explore the capabilities of GLM-4-32B-0414.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable the full context window for long sequences,
  • ensure top-tier security by processing data in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (a serving sketch follows this list).
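
A minimal self-hosting sketch with vLLM covering the last two points; tensor_parallel_size, max_model_len, and the adapter path are illustrative assumptions for a two-GPU instance.

```python
# Hedged sketch: serving the model on a dedicated two-GPU instance with vLLM,
# optionally attaching a LoRA adapter. The adapter name and path are hypothetical.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(
    model="THUDM/GLM-4-32B-0414",
    tensor_parallel_size=2,   # split the 32B weights across 2 GPUs
    max_model_len=32768,      # native context; raise it if YaRN is enabled
    enable_lora=True,
)

sampling = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(
    ["Write a Python function that parses ISO-8601 dates."],
    sampling,
    lora_request=LoRARequest("my-adapter", 1, "/path/to/lora"),  # hypothetical
)
print(outputs[0].outputs[0].text)
```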

Recommended configurations for hosting GLM-4-32B-0414

Prices:

Name                     vCPU   RAM, MB   Disk, GB   GPUs   Price, hour
teslaa10-1.16.32.160       16     32768        160      1         $0.53
teslaa2-2.16.32.160        16     32768        160      2         $0.57
rtx2080ti-2.12.64.160      12     65536        160      2         $0.69
teslat4-2.16.32.160        16     32768        160      2         $0.80
rtx3090-1.16.24.160        16     24576        160      1         $0.88
rtx3080-2.16.32.160        16     32768        160      2         $0.97
rtx4090-1.16.32.160        16     32768        160      1         $1.15
teslav100-1.12.64.160      12     65536        160      1         $1.20
rtx5090-1.16.64.160        16     65536        160      1         $1.59
teslaa100-1.16.64.160      16     65536        160      1         $2.58
teslah100-1.16.64.160      16     65536        160      1         $5.11
Prices:

Name                     vCPU   RAM, MB   Disk, GB   GPUs   Price, hour
teslaa10-2.16.64.160       16     65536        160      2         $0.93
rtx2080ti-4.16.64.160      16     65536        160      4         $1.18
teslat4-4.16.64.160        16     65536        160      4         $1.48
rtx3090-2.16.64.160        16     65536        160      2         $1.67
rtx3080-4.16.64.160        16     65536        160      4         $1.82
rtx4090-2.16.64.160        16     65536        160      2         $2.19
teslav100-2.16.64.240      16     65536        240      2         $2.22
teslaa100-1.16.64.160      16     65536        160      1         $2.58
rtx5090-2.16.64.160        16     65536        160      2         $2.93
teslah100-1.16.64.160      16     65536        160      1         $5.11
Prices:

Name                     vCPU   RAM, MB   Disk, GB   GPUs   Price, hour
teslaa10-3.16.96.160       16     98304        160      3         $1.34
rtx3090-3.16.96.160        16     98304        160      3         $2.45
teslaa100-1.16.128.160     16    131072        160      1         $2.71
rtx4090-3.16.96.160        16     98304        160      3         $3.23
rtx5090-3.16.96.160        16     98304        160      3         $4.34
teslah100-1.16.128.160     16    131072        160      1         $5.23

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.