Qwen2.5-72B-Instruct

Qwen2.5-72B is the flagship open-weight model of the series, featuring 72 billion parameters, 80 layers, and grouped-query attention with 64 query heads and 8 key/value heads, representing the pinnacle of Alibaba's open-source language model capabilities. The model supports a 128K-token context window and generation of up to 8K tokens, enabling it to analyze multiple documents at once and produce long, detailed content with exceptional accuracy.

Trained on an extended dataset of 18 trillion tokens with enhanced filtering and specialized data in mathematics and programming, Qwen2.5-72B delivers outstanding performance across a wide range of tasks. Its most remarkable trait is achieving state-of-the-art results among open-weight models while being significantly smaller than its competitors: according to the technical report, the model is competitive with Llama-3-405B-Instruct despite being roughly one-fifth its size.
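The 64/8 head split mentioned above is grouped-query attention (GQA): 64 query heads share only 8 key/value heads, shrinking the KV cache roughly 8x. A minimal shape-bookkeeping sketch (illustrative only, not the model's actual implementation):

```python
import numpy as np

def gqa_expand_kv(kv, num_q_heads):
    """Repeat each KV head so every group of query heads gets a copy (GQA)."""
    num_kv_heads = kv.shape[0]
    group = num_q_heads // num_kv_heads  # 64 // 8 = 8 query heads per KV head
    return np.repeat(kv, group, axis=0)

# Qwen2.5-72B: 64 query heads, 8 KV heads, head_dim = 8192 / 64 = 128
seq_len, head_dim = 4, 128
kv = np.random.randn(8, seq_len, head_dim)       # cached tensor: (kv_heads, seq, dim)
kv_expanded = gqa_expand_kv(kv, num_q_heads=64)  # expanded view: (64, seq, dim)
print(kv_expanded.shape)  # (64, 4, 128)
```

Only the 8-head tensor is stored in the KV cache; the expansion happens at attention time, which is why GQA models handle long contexts with far less GPU memory.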

Distributed under the Qwen license, Qwen2.5-72B is designed for projects requiring the highest-quality natural language processing. The model is ideally suited for: fundamental AI research; development of cutting-edge AI products; training and fine-tuning of specialized models; serving as a foundation for multimodal systems; and building advanced AI agents.


Announce Date: 16.09.2024
Parameters: 73B
Context: 33K
Layers: 80
Attention Type: Full Attention
Developer: Qwen
Transformers Version: 4.43.1
License: qwen

Public endpoint

Use our pre-built public endpoints free of charge to test inference and explore the capabilities of Qwen2.5-72B-Instruct. You can obtain an API access token on the token management page after registration and verification.
Model Name Context Type GPU Status Link
There are no public endpoints for this model yet.
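When an endpoint does become available, such services typically expose an OpenAI-compatible chat completions API. The sketch below assembles a request payload; the base URL is a placeholder and the exact schema is an assumption for illustration:

```python
import json

API_BASE = "https://api.example.cloud/v1"  # placeholder; substitute your provider's endpoint URL
API_TOKEN = "YOUR_TOKEN"  # obtained from the token management page

def build_chat_request(prompt, model="Qwen2.5-72B-Instruct", max_tokens=512):
    """Assemble an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.7,
    }

payload = build_chat_request("Summarize grouped-query attention in two sentences.")
print(json.dumps(payload, indent=2))
# Send with, e.g.:
# requests.post(f"{API_BASE}/chat/completions",
#               headers={"Authorization": f"Bearer {API_TOKEN}"},
#               json=payload)
```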

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting Qwen2.5-72B-Instruct
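As a rough back-of-the-envelope for why multi-GPU configurations are required, the weights of a ~72.7B-parameter model alone occupy the following (illustrative arithmetic; KV cache and activation overhead are extra):

```python
def weight_memory_gib(params_billions, bits_per_param):
    """Approximate memory for model weights alone, in GiB."""
    return params_billions * 1e9 * bits_per_param / 8 / 2**30

for name, bits in [("FP16/BF16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{name}: ~{weight_memory_gib(72.7, bits):.0f} GiB")
# FP16/BF16: ~135 GiB
# INT8: ~68 GiB
# INT4: ~34 GiB
```

This is why the full-precision model is split across several GPUs via tensor or pipeline parallelism, while quantized variants can fit on fewer, larger cards.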

Prices:
Name                         | Context | Parallelism | GPUs | Price, hour | TPS   |
teslaa10-3.16.96.160         | 32,768  | pipeline    | 3    | $1.34       | 1.610 | Launch
teslaa10-4.16.64.160         | 32,768  | tensor      | 4    | $1.62       | 3.520 | Launch
teslaa2-6.32.128.160         | 32,768  | pipeline    | 6    | $1.65       | 3.020 | Launch
teslav100-2.16.64.240        | 32,768  | tensor      | 2    | $2.22       | 1.140 | Launch
rtx3090-3.16.96.160          | 32,768  | pipeline    | 3    | $2.29       | 1.610 | Launch
rtxa5000-4.16.128.160.nvlink | 32,768  | tensor      | 4    | $2.34       | 3.520 | Launch
teslaa100-1.16.64.160        | 32,768  | —           | 1    | $2.37       | 2.830 | Launch
rtx4090-3.16.96.160          | 32,768  | pipeline    | 3    | $2.83       | 1.610 | Launch
rtx3090-4.16.64.160          | 32,768  | tensor      | 4    | $2.89       | 3.520 | Launch
rtx5090-2.16.64.160          | 32,768  | tensor      | 2    | $2.93       | 1.140 | Launch
rtx4090-4.16.64.160          | 32,768  | tensor      | 4    | $3.60       | 3.520 | Launch
h100-1.16.64.160             | 32,768  | —           | 1    | $3.83       | 2.830 | Launch
h100nvl-1.16.96.160          | 32,768  | —           | 1    | $4.11       | 4.090 | Launch
h200-1.16.128.160            | 32,768  | —           | 1    | $4.74       | 8.320 | Launch
Prices:
Name                         | Context | Parallelism | GPUs | Price, hour | TPS   |
rtxa5000-6.24.192.160.nvlink | 32,768  | pipeline    | 6    | $3.50       | 4.689 | Launch
teslav100-3.64.256.320       | 32,768  | pipeline    | 3    | $3.89       | 1.119 | Launch
h100nvl-1.16.96.160          | 32,768  | —           | 1    | $4.11       | 1.439 | Launch
rtx5090-3.16.96.160          | 32,768  | pipeline    | 3    | $4.34       | 1.119 | Launch
teslav100-4.32.96.160        | 32,768  | tensor      | 4    | $4.35       | 3.749 | Launch
teslaa100-2.24.96.160.nvlink | 32,768  | tensor      | 2    | $4.61       | 7.129 | Launch
rtxa5000-8.24.256.160.nvlink | 32,768  | tensor      | 8    | $4.61       | 8.509 | Launch
h200-1.16.128.160            | 32,768  | —           | 1    | $4.74       | 5.669 | Launch
rtx5090-4.16.128.160         | 32,768  | tensor      | 4    | $5.74       | 3.749 | Launch
rtx4090-6.44.256.160         | 32,768  | pipeline    | 6    | $5.83       | 4.689 | Launch
rtx4090-8.44.256.160         | 32,768  | tensor      | 8    | $7.51       | 8.509 | Launch
h100-2.24.256.160            | 32,768  | tensor      | 2    | $7.84       | 7.129 | Launch
Prices:
Name                         | Context | Parallelism | GPUs | Price, hour | TPS    |
teslaa100-3.32.384.240       | 32,768  | pipeline    | 3    | $7.36       | 6.299  | Launch
h100nvl-2.24.192.240         | 32,768  | tensor      | 2    | $8.17       | 1.869  | Launch
rtx5090-6.44.256.240         | 32,768  | pipeline    | 6    | $8.86       | 1.229  | Launch
teslaa100-4.16.256.240       | 32,768  | tensor      | 4    | $9.14       | 13.249 | Launch
h200-2.24.256.240            | 32,768  | tensor      | 2    | $9.41       | 10.329 | Launch
rtx5090-8.44.256.240         | 32,768  | tensor      | 8    | $11.55      | 6.489  | Launch
h100-3.32.384.240            | 32,768  | pipeline    | 3    | $11.73      | 6.299  | Launch
h100-4.16.256.240            | 32,768  | tensor      | 4    | $14.96      | 13.249 | Launch

Related models

QwQ

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.