Qwen2.5-72B

Qwen2.5-72B is the flagship open-weight model of the series: 72 billion parameters, 80 layers, and grouped-query attention with 64 query heads and 8 key-value heads. The model supports a 128K-token context window and up to 8K tokens of generation, so it can analyze multiple long documents in a single pass and produce detailed, accurate output.

Trained on an extended dataset of 18 trillion tokens with enhanced filtering and specialized data for mathematics and programming, Qwen2.5-72B delivers strong performance across a wide range of tasks. Notably, it achieves state-of-the-art results among open-weight models at a fraction of competitors' size: according to the technical report, it is competitive with Llama-3-405B-Instruct, a model roughly five times larger.

Distributed under the Qwen License, Qwen2.5-72B is designed for projects requiring the highest-quality natural language processing. The model is well suited for:

  • fundamental AI research,
  • development of cutting-edge AI products,
  • training and fine-tuning specialized models,
  • serving as a foundation for multimodal systems,
  • building advanced AI agents.


Announce Date: 19.09.2024
Parameters: 72B
Context: 131K
Attention Type: Full Attention
VRAM requirements: 73.5 GB with 4-bit quantization
Developer: Alibaba
Transformers Version: 4.43.1
License: qwen
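The VRAM figure above can be sanity-checked with a rough back-of-envelope calculation. The sketch below is illustrative: the weight math is exact, but the 2x overhead factor for KV cache, activations, and framework buffers is an assumption, not a measured value.

```python
# Rough VRAM estimate for Qwen2.5-72B at 4-bit quantization.
# The overhead factor (for KV cache, activations, and framework
# buffers) is an assumption, not a measured value.

def estimate_vram_gb(params_b: float, bits: int, overhead: float = 1.0) -> float:
    """Approximate VRAM in GB: weight bytes scaled by a runtime overhead factor."""
    weight_gb = params_b * bits / 8  # 1B params at 8 bits = 1 GB
    return weight_gb * overhead

weights_only = estimate_vram_gb(72, 4)        # ~36 GB for the weights alone
with_overhead = estimate_vram_gb(72, 4, 2.0)  # ~72 GB with cache and buffers

print(f"weights only: {weights_only:.1f} GB")
print(f"with runtime overhead: {with_overhead:.1f} GB")
```

With a 2x overhead assumption the estimate lands near the 73.5 GB figure quoted above, which is why 4-bit deployments still need multiple GPUs.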

Public endpoint

Use our pre-built public endpoints to test inference and explore Qwen2.5-72B capabilities.
Model Name Context Type GPU TPS Status Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable the full context window for long sequences,
  • ensure top-tier security by processing data in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
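One common way to serve Qwen2.5-72B on a multi-GPU instance like those below is vLLM's OpenAI-compatible server. The command is a sketch, not a tested recipe: the AWQ checkpoint name, tensor-parallel size, and context length are illustrative and should be matched to the configuration you rent.

```shell
# Sketch: serve a 4-bit (AWQ) Qwen2.5-72B build on a 4-GPU instance.
# Adjust --tensor-parallel-size to your GPU count and --max-model-len
# to the context your VRAM budget allows.
vllm serve Qwen/Qwen2.5-72B-Instruct-AWQ \
    --quantization awq \
    --tensor-parallel-size 4 \
    --max-model-len 32768
```

Once the server is up, any OpenAI-compatible client can send chat completions to its local endpoint.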

Recommended configurations for hosting Qwen2.5-72B

Prices:
Name                    vCPU  RAM, GB  Disk, GB  GPUs  Price, hour
teslaa10-4.16.128.160   16    128      160       4     $1.75
rtx3090-4.16.128.160    16    128      160       4     $3.23
rtx4090-4.16.128.160    16    128      160       4     $4.26
rtx5090-3.16.96.160     16    96       160       3     $4.34
teslaa100-2.24.256.160  24    256      160       2     $5.35
teslah100-2.24.256.160  24    256      160       2     $10.40

Prices:
Name                    vCPU  RAM, GB  Disk, GB  GPUs  Price, hour
teslaa100-2.24.256.160  24    256      160       2     $5.35
rtx5090-4.16.128.160    16    128      160       4     $5.74
teslah100-2.24.256.160  24    256      160       2     $10.40

Prices:
Name                    vCPU  RAM, GB  Disk, GB  GPUs  Price, hour
teslaa100-3.32.384.240  32    384      240       3     $8.00
rtx4090-8.44.256.240    44    256      240       8     $8.59
rtx5090-6.44.256.240    44    256      240       6     $8.86
teslah100-3.32.384.240  32    384      240       3     $15.58
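For budgeting, the hourly rates above translate to a monthly cost straightforwardly. The sketch below uses the teslaa100-2 rate from the table and assumes a 730-hour average month; long-term monthly billing may differ.

```python
# Monthly cost estimate from an hourly rate (730 h ≈ one average month).
def monthly_cost(hourly_usd: float, hours: int = 730) -> float:
    return hourly_usd * hours

# teslaa100-2 configuration at $5.35/hour (rate taken from the table above)
print(f"${monthly_cost(5.35):,.2f} per month")
```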

Related models

QwQ

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.