Qwen3-14B

Tags: reasoning

Qwen3-14B is a 14-billion-parameter model with a deep 40-layer architecture that uses grouped-query attention (40 query heads sharing 8 key/value heads). It supports a context window of 40K tokens and does not tie its input and output embeddings, giving the output projection its own independent weights.
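The grouped-query layout directly bounds KV-cache memory at serving time. A back-of-the-envelope sketch (the head dimension of 128 and an fp16 cache are assumptions, not stated on this page):

```python
# Rough KV-cache size estimate for Qwen3-14B's GQA layout.
LAYERS = 40
KV_HEADS = 8        # grouped-query attention: 40 query heads share 8 KV heads
HEAD_DIM = 128      # assumed head dimension (not stated on this page)
DTYPE_BYTES = 2     # fp16 cache assumed
CONTEXT = 40_960    # the 40K-token window

def kv_cache_bytes_per_token(layers=LAYERS, kv_heads=KV_HEADS,
                             head_dim=HEAD_DIM, dtype_bytes=DTYPE_BYTES):
    """Bytes of KV cache one token occupies (keys + values, all layers)."""
    return 2 * layers * kv_heads * head_dim * dtype_bytes

per_token = kv_cache_bytes_per_token()           # 163,840 bytes ≈ 160 KiB/token
full_context_gib = per_token * CONTEXT / 2**30   # ≈ 6.25 GiB at full 40K context
print(per_token, round(full_context_gib, 2))
```

Under these assumptions a single full-context sequence needs about 6.25 GiB of cache on top of the weights, which is why the multi-GPU configurations below matter for concurrency.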

The model delivers exceptional performance in tasks requiring expert-level knowledge and complex analysis. Its support for 119 languages, combined with advanced hybrid reasoning capabilities, makes it ideal for high-complexity international projects.

Qwen3-14B is designed for enterprise solutions and research initiatives — including automation of complex business processes, scientific research, AI product development, and the creation of specialized expert systems. The model is perfectly suited for companies in need of a high-quality AI assistant for strategic planning, technical consulting, and innovative product development.


Announce Date: 29.04.2025
Parameters: 14.8B
Context: 40K
Layers: 40
Attention Type: Full or Sliding Window Attention
Developer: Qwen
Transformers Version: 4.51.0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Qwen3-14B capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.
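Once a public endpoint becomes available, calling it can be sketched as follows, assuming an OpenAI-compatible chat API (the URL, model name, and the `chat_template_kwargs` thinking-mode toggle below are assumptions based on common vLLM-style servers, not confirmed by this page):

```python
import os

API_URL = "https://example-endpoint.immers.cloud/v1/chat/completions"  # hypothetical URL

def build_chat_request(prompt, model="Qwen3-14B", thinking=True):
    """Build an OpenAI-style chat payload.

    On vLLM-style servers, Qwen3's hybrid thinking mode can often be
    toggled per request via chat_template_kwargs (assumption).
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
        "chat_template_kwargs": {"enable_thinking": thinking},
    }

# Only send a real request when an API token is configured.
if os.environ.get("IMMERS_API_TOKEN"):
    import requests
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['IMMERS_API_TOKEN']}"},
        json=build_chat_request("Explain grouped-query attention briefly."),
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])
```

The bearer token is the API access token obtained on the token management page after registration and verification.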

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting Qwen3-14B

Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency |
|---|---|---|---|---|---|---|
| teslaa2-2.16.32.160 | 40,960 | tensor | 2 | $0.57 | 8.950 | 2.211 |
| rtx2080ti-3.12.24.120 | 40,960 | pipeline | 3 | $0.84 | — | 1.955 |
| teslat4-3.32.64.200 | 40,960 | pipeline | 3 | $0.88 | — | 4.115 |
| rtx2080ti-4.16.32.160 | 40,960 | tensor | 4 | $1.12 | — | 3.139 |
| rtx4090-1.32.64.160 | 40,960 | — | 1 | $1.18 | — | 1.459 |
| rtxa5000-2.16.64.160.nvlink | 40,960 | tensor | 2 | $1.23 | — | 4.515 |
| teslat4-4.48.192.320 | 40,960 | tensor | 4 | $1.43 | — | 6.019 |
| rtx3080-3.16.96.160 | 40,960 | pipeline | 3 | $1.49 | — | 1.523 |
| rtx5090-1.32.64.160 | 40,960 | — | 1 | $1.69 | — | 2.611 |
| teslaa10-4.16.128.160 | 40,960 | tensor | 4 | $1.75 | — | 10.627 |
| rtx3080-4.16.96.160 | 40,960 | tensor | 4 | $1.88 | — | 2.563 |
| teslaa100-1.16.64.160 | 40,960 | — | 1 | $2.37 | 52.520 | 9.523 |
| rtx3090-4.16.128.160 | 40,960 | tensor | 4 | $3.01 | — | 10.627 |
| h100-1.16.64.160 | 40,960 | — | 1 | $3.83 | 64.210 | 9.523 |
| h100nvl-1.16.96.160 | 40,960 | — | 1 | $4.11 | 79.950 | 11.539 |
| teslaa100-2.24.256.160.nvlink | 40,960 | tensor | 2 | $4.93 | — | 20.643 |
| h200-2.24.256.160.nvlink | 40,960 | tensor | 2 | $9.40 | — | 38.211 |
| h200-4.32.768.480 | 40,960 | tensor | 4 | $19.23 | — | 78.019 |
Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency |
|---|---|---|---|---|---|---|
| teslaa2-2.16.32.160 | 40,960 | tensor | 2 | $0.57 | — | 1.195 |
| rtx2080ti-3.12.24.120 | 40,960 | pipeline | 3 | $0.84 | — | 0.939 |
| teslat4-3.32.64.200 | 40,960 | pipeline | 3 | $0.88 | — | 3.099 |
| rtx2080ti-4.16.32.160 | 40,960 | tensor | 4 | $1.12 | — | 2.123 |
| rtxa5000-2.16.64.160.nvlink | 40,960 | tensor | 2 | $1.23 | — | 3.499 |
| teslat4-4.48.192.320 | 40,960 | tensor | 4 | $1.43 | — | 5.003 |
| rtx5090-1.32.64.160 | 40,960 | — | 1 | $1.69 | — | 1.595 |
| teslaa10-4.16.128.160 | 40,960 | tensor | 4 | $1.75 | — | 9.611 |
| rtx3080-4.16.96.160 | 40,960 | tensor | 4 | $1.88 | — | 1.547 |
| rtx4090-2.16.64.160 | 40,960 | tensor | 2 | $1.92 | — | 3.499 |
| teslaa100-1.16.64.160 | 40,960 | — | 1 | $2.37 | — | 8.507 |
| rtx3090-4.16.128.160 | 40,960 | tensor | 4 | $3.01 | — | 9.611 |
| h100-1.16.64.160 | 40,960 | — | 1 | $3.83 | — | 8.507 |
| h100nvl-1.16.96.160 | 40,960 | — | 1 | $4.11 | — | 10.523 |
| teslaa100-2.24.256.160.nvlink | 40,960 | tensor | 2 | $4.93 | — | 19.627 |
| h200-2.24.256.160.nvlink | 40,960 | tensor | 2 | $9.40 | — | 37.195 |
| h200-4.32.768.480 | 40,960 | tensor | 4 | $19.23 | — | 77.003 |
Prices:

| Name | Context | Parallelism | GPUs | Price, hour | TPS | Max Concurrency |
|---|---|---|---|---|---|---|
| teslat4-3.32.64.200 | 40,960 | pipeline | 3 | $0.88 | — | 0.990 |
| rtxa5000-2.16.64.160.nvlink | 40,960 | tensor | 2 | $1.23 | — | 1.390 |
| teslaa2-4.32.128.480 | 40,960 | tensor | 4 | $1.29 | — | 2.894 |
| teslaa2-3.32.256.160 | 40,960 | pipeline | 3 | $1.31 | — | 0.990 |
| teslat4-4.48.192.320 | 40,960 | tensor | 4 | $1.43 | — | 2.894 |
| teslaa10-4.16.128.160 | 40,960 | tensor | 4 | $1.75 | — | 7.502 |
| rtx4090-2.16.64.160 | 40,960 | tensor | 2 | $1.92 | — | 1.390 |
| teslaa100-1.16.64.160 | 40,960 | — | 1 | $2.37 | — | 6.398 |
| rtx5090-2.16.64.160 | 40,960 | tensor | 2 | $2.93 | — | 3.694 |
| rtx3090-4.16.128.160 | 40,960 | tensor | 4 | $3.01 | — | 7.502 |
| h100-1.16.64.160 | 40,960 | — | 1 | $3.83 | — | 6.398 |
| h100nvl-1.16.96.160 | 40,960 | — | 1 | $4.11 | — | 8.414 |
| teslaa100-2.24.256.160.nvlink | 40,960 | tensor | 2 | $4.93 | — | 17.518 |
| h200-2.24.256.160.nvlink | 40,960 | tensor | 2 | $9.40 | — | 35.086 |
| h200-4.32.768.480 | 40,960 | tensor | 4 | $19.23 | — | 74.894 |
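Hourly prices can be compared as cost per generated token. A small sketch, assuming the listed TPS is sustained aggregate generation throughput (the two configurations below are taken from the first pricing table):

```python
def usd_per_million_tokens(price_per_hour, tps):
    """Convert an hourly rental price plus throughput into $ per 1M generated
    tokens. Assumes TPS is sustained aggregate tokens/second for the server."""
    tokens_per_hour = tps * 3600
    return price_per_hour / tokens_per_hour * 1_000_000

# teslaa100-1.16.64.160: $2.37/hour at 52.520 TPS
a100 = usd_per_million_tokens(2.37, 52.520)      # ≈ $12.53 per 1M tokens
# h100nvl-1.16.96.160: $4.11/hour at 79.950 TPS
h100nvl = usd_per_million_tokens(4.11, 79.950)   # ≈ $14.28 per 1M tokens
print(round(a100, 2), round(h100nvl, 2))
```

By this metric the cheaper A100 configuration is also the cheaper one per token here, though real cost depends on keeping the server saturated for the whole billed hour.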


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.