Qwen2.5-0.5B-Instruct

Qwen2.5-0.5B is the smallest model in the series, with roughly 0.5 billion (495M) parameters. It supports a 32K-token context window and can generate up to 8K tokens in a single response. Its key advantages are exceptionally low power consumption and high processing speed while maintaining reasonable output quality. Optimized for deployment on resource-constrained hardware, including mobile devices, IoT systems, and edge computing nodes, it delivers solid performance in language understanding and basic reasoning tasks thanks to high-quality pretraining on 18 trillion tokens.

Qwen2.5-0.5B is ideal for integration into applications that require fast responses on limited computational budgets, such as website chatbots, mobile assistants, autocompletion systems, and basic text processing. It also serves as an excellent base model for fine-tuning on specific tasks with minimal training and inference costs.
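
You can also run the model locally with Hugging Face Transformers (4.43.1 or newer, matching the specification below). The following is a minimal sketch, assuming the public Qwen/Qwen2.5-0.5B-Instruct checkpoint on the Hugging Face Hub and a standard PyTorch environment:

```python
# Minimal local inference sketch for Qwen2.5-0.5B-Instruct using Transformers.
# Assumes the public Hugging Face checkpoint "Qwen/Qwen2.5-0.5B-Instruct".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # place the (small) model on GPU if one is available
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain in two sentences why small models suit edge devices."},
]
# Build the chat-formatted prompt expected by the instruct model.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

# The model can generate up to 8K tokens; 256 is plenty for a short answer.
output_ids = model.generate(**inputs, max_new_tokens=256)
reply = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(reply)
```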


Announce Date: 16.09.2024
Parameters: 495M
Context: 32K
Layers: 24
Attention Type: Full Attention
Developer: Qwen
Transformers Version: 4.43.1
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen2.5-0.5B-Instruct capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
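
Once a public endpoint is available, access typically goes through an HTTP API authorized with your token. The sketch below is illustrative only: it assumes an OpenAI-compatible chat completions interface, and the base URL and token are placeholders, not real values.

```python
# Hypothetical request to a public endpoint. Assumes an OpenAI-compatible
# chat completions API; the URL and token below are placeholders.
import requests

BASE_URL = "https://<your-endpoint>/v1"   # placeholder: taken from the endpoint's Link column
API_TOKEN = "<your-api-token>"            # issued on the token management page

payload = {
    "model": "Qwen2.5-0.5B-Instruct",
    "messages": [{"role": "user", "content": "Hello! What can you do?"}],
    "max_tokens": 128,
}
response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```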

Private server

Rent your own dedicated physical instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable the full context window for long sequences,
  • ensure top-tier security by processing data in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned checkpoints or LoRA adapters (a serving sketch follows this list).
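
The configurations below list a parallelism mode (tensor or pipeline) for multi-GPU instances. As a minimal illustration, the sketch below assumes the instance serves the model with vLLM; this is an assumption about the serving stack, not a statement about the platform's actual setup, and the model argument can point at a local directory containing your own fine-tuned weights.

```python
# Minimal self-hosted serving sketch, assuming vLLM on a private instance
# (an assumption; any OpenAI-compatible inference server works similarly).
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # or a local path to custom fine-tuned weights
    max_model_len=32768,                 # enable the full 32K context window
    tensor_parallel_size=2,              # match the GPU count of a "tensor" configuration
)

sampling = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.chat(
    [{"role": "user", "content": "Give a one-line summary of edge inference."}],
    sampling_params=sampling,
)
print(outputs[0].outputs[0].text)
```

On four-GPU configurations marked "pipeline", pipeline_parallel_size=4 would replace the tensor-parallel setting; single-GPU instances need neither.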

Recommended server configurations for hosting Qwen2.5-0.5B-Instruct

Prices:

Name                           Context, tokens  Parallelism  GPUs  Price, hour  TPS
teslaa2-1.16.32.240            32,768           -            1     $0.39        30.543
teslat4-1.16.64.160            32,768           -            1     $0.42        30.543
rtx2080ti-2.16.64.160          32,768           tensor       2     $0.71        38.277
teslaa10-2.16.64.160           32,768           tensor       2     $0.93        100.677
rtx3080-2.16.64.160            32,768           tensor       2     $1.03        33.477
rtx4090-1.32.64.160            32,768           -            1     $1.18        49.743
rtxa5000-2.16.64.160.nvlink    32,768           tensor       2     $1.23        100.677
rtx3090-2.16.64.160            32,768           tensor       2     $1.56        100.677
rtx5090-1.32.64.160            32,768           -            1     $1.69        68.943
teslaa10-4.16.128.160          32,768           pipeline     4     $1.75        202.543
teslaa100-1.16.64.160          32,768           -            1     $2.37        184.143
rtx3090-4.16.128.160           32,768           pipeline     4     $3.01        202.543
h100-1.16.64.160               32,768           -            1     $3.83        184.143
h100nvl-1.16.96.160            32,768           -            1     $4.11        217.743
teslaa100-2.24.256.160.nvlink  32,768           tensor       2     $4.93        369.477
h200-2.24.256.160.nvlink       32,768           tensor       2     $9.40        662.277
h200-2.32.384.320              32,768           tensor       2     $9.72        662.277
h200-4.32.768.480              32,768           pipeline     4     $19.23       1,325.743


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.