Qwen3-8B

reasoning

Qwen3-8B opens a new tier in the series. With 8.2 billion parameters, it keeps the 36-layer, 32-attention-head layout of Qwen3-4B, but drops tied embeddings and extends the context window to 128K tokens, which makes it a strong fit for long documents and complex, multi-part tasks.

Doubling the parameter count relative to the 4B version noticeably raises response quality, most of all in mathematical reasoning, programming, and advanced analysis; the model is strongest on tasks that demand multi-step reasoning and deep contextual understanding. Built-in support for both *thinking* and *non-thinking* modes lets you match inference effort to task complexity and available processing time, while the *Thinking Budget* mechanism caps how many tokens the model may spend on reasoning before it answers, trading quality against latency and cost.
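A minimal sketch of toggling these modes locally with Hugging Face Transformers (4.51.0 or newer, per the requirements below), assuming the public Qwen/Qwen3-8B checkpoint; the `enable_thinking` flag is part of the model's chat template, and capping `max_new_tokens` stands in here for a strict thinking budget:

```python
# Sketch: switching Qwen3-8B between thinking and non-thinking modes with Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many prime numbers are there below 100?"}]

# enable_thinking=True makes the model emit a <think>...</think> block before the answer;
# set it to False for fast, direct replies on simple queries.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# max_new_tokens acts as a rough overall budget (reasoning plus answer) in this sketch.
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```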

Qwen3-8B is ideal for advanced professional applications such as financial analysis, medical diagnostics, and legal practice. It is well-suited for building intelligent assistants for professionals, automated technical documentation systems, and educational platforms.


Announce Date: 29.04.2025
Parameters: 8.2B
Context: 131K
Attention Type: Full or Sliding Window Attention
VRAM requirements: 21.8 GB with 4-bit quantization (see the loading sketch below)
Developer: Alibaba
Transformers Version: 4.51.0
Ollama Version: 0.6.6
License: Apache 2.0
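
The VRAM estimate above assumes 4-bit weights; a hedged sketch of loading the checkpoint that way with bitsandbytes NF4 quantization is shown below (actual memory use also depends on context length and batch size):

```python
# Sketch: loading Qwen3-8B with 4-bit (NF4) weight-only quantization via bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-8B")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B",
    quantization_config=quant_config,
    device_map="auto",  # VRAM use grows with the context you actually feed in
)
```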

Public endpoint

Use our pre-built public endpoints to test inference and explore Qwen3-8B capabilities.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the serving sketch after this list).
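
For the last scenario, here is a hedged sketch of serving Qwen3-8B together with a fine-tuned LoRA adapter on a dedicated instance using vLLM's offline API; the adapter name and path are placeholders to replace with your own:

```python
# Sketch: Qwen3-8B plus a custom LoRA adapter served through vLLM on a private instance.
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

llm = LLM(
    model="Qwen/Qwen3-8B",
    max_model_len=32768,  # raise toward 131K if the GPUs have enough memory
    enable_lora=True,
)

sampling = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=512)

outputs = llm.generate(
    ["Summarize the key obligations in this contract clause: ..."],
    sampling,
    # Placeholder adapter name, ID, and path, for illustration only.
    lora_request=LoRARequest("my-finetune", 1, "/path/to/lora-adapter"),
)
print(outputs[0].outputs[0].text)
```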

Recommended configurations for hosting Qwen3-8B

Prices:

| Name | vCPU | RAM, MB | Disk, GB | GPUs | Price/hour |
|---|---|---|---|---|---|
| teslaa2-2.16.32.160 | 16 | 32768 | 160 | 2 | $0.57 |
| teslat4-2.16.32.160 | 16 | 32768 | 160 | 2 | $0.80 |
| teslaa10-2.16.64.160 | 16 | 65536 | 160 | 2 | $0.93 |
| rtx2080ti-3.16.64.160 | 16 | 65536 | 160 | 3 | $0.95 |
| teslav100-1.12.64.160 | 12 | 65536 | 160 | 1 | $1.20 |
| rtx3080-3.16.64.160 | 16 | 65536 | 160 | 3 | $1.43 |
| rtx5090-1.16.64.160 | 16 | 65536 | 160 | 1 | $1.59 |
| rtx3090-2.16.64.160 | 16 | 65536 | 160 | 2 | $1.67 |
| rtx4090-2.16.64.160 | 16 | 65536 | 160 | 2 | $2.19 |
| teslaa100-1.16.64.160 | 16 | 65536 | 160 | 1 | $2.58 |
| teslah100-1.16.64.160 | 16 | 65536 | 160 | 1 | $5.11 |
Prices:

| Name | vCPU | RAM, MB | Disk, GB | GPUs | Price/hour |
|---|---|---|---|---|---|
| teslaa10-2.16.64.160 | 16 | 65536 | 160 | 2 | $0.93 |
| rtx2080ti-4.16.64.160 | 16 | 65536 | 160 | 4 | $1.18 |
| teslat4-4.16.64.160 | 16 | 65536 | 160 | 4 | $1.48 |
| rtx3090-2.16.64.160 | 16 | 65536 | 160 | 2 | $1.67 |
| rtx3080-4.16.64.160 | 16 | 65536 | 160 | 4 | $1.82 |
| rtx4090-2.16.64.160 | 16 | 65536 | 160 | 2 | $2.19 |
| teslaa100-1.16.64.160 | 16 | 65536 | 160 | 1 | $2.58 |
| rtx5090-2.16.64.160 | 16 | 65536 | 160 | 2 | $2.93 |
| teslah100-1.16.64.160 | 16 | 65536 | 160 | 1 | $5.11 |

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.