Qwen3-32B

reasoning

Qwen3-32B is the most powerful dense model in the series: 32 billion parameters, a 64-layer architecture, 64 attention heads, and support for a context window of 128K tokens. It sits at the top of the dense lineup within Qwen3, delivering performance comparable to leading proprietary models on most tasks. The developers note that, thanks to architectural improvements and pre-training on 36 trillion tokens of high-quality data, Qwen3-32B matches the quality of Qwen2.5-72B with less than half the parameters.
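
These architectural figures can be checked against the published model configuration on the Hugging Face Hub. A minimal sketch, assuming the public Qwen/Qwen3-32B repository name and an installed transformers package:

# pip install transformers
from transformers import AutoConfig

# Downloads only config.json; the repo name "Qwen/Qwen3-32B" is an assumption here.
config = AutoConfig.from_pretrained("Qwen/Qwen3-32B")

print(config.num_hidden_layers)        # transformer layers (64 per the description above)
print(config.num_attention_heads)      # attention (query) heads (64)
print(config.num_key_value_heads)      # KV heads (Qwen3 uses grouped-query attention)
print(config.max_position_embeddings)  # maximum context length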

The model posts strong results across benchmarks, excelling in programming, mathematical problem-solving, and knowledge-intensive science and engineering domains. Qwen3-32B handles tasks at the level of a senior domain expert and delivers quality suitable for mission-critical commercial applications. Full support for 119 languages makes it a universal choice for applications requiring international reach.

The model targets flagship products from major technology companies, national research initiatives, mission-critical AI systems, and any application where quality is the top priority. Qwen3-32B is well suited to premium-tier AI assistants, advanced analytical systems, professional-grade development tools, and other use cases demanding the highest level of natural language processing quality.


Announce Date: 29.04.2025
Parameters: 32.8B
Context: 131K
Attention Type: Full or Sliding Window Attention
VRAM requirements: 47.3 GB with 4-bit quantization (see the estimate below)
Developer: Alibaba
Transformers Version: 4.51.0
Ollama Version: 0.6.6
License: Apache 2.0
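
For intuition on the 47.3 GB figure above, a rough back-of-envelope estimate: 4-bit weights plus an fp16 KV cache sized for the full context. The KV head and head-dimension values below are assumptions, not taken from this page, and real deployments add runtime overhead:

# Rough VRAM estimate for Qwen3-32B at 4-bit quantization (illustrative only).
params = 32.8e9                 # parameter count from the spec above
weight_gb = params * 0.5 / 1e9  # 4 bits = 0.5 bytes per parameter -> ~16.4 GB

# KV cache: 2 (K and V) x layers x kv_heads x head_dim x 2 bytes (fp16) x tokens.
# kv_heads and head_dim are assumed typical values for this architecture.
layers, kv_heads, head_dim = 64, 8, 128
ctx = 131_072                   # full advertised context window
kv_gb = 2 * layers * kv_heads * head_dim * 2 * ctx / 1e9  # ~34.4 GB

print(f"~{weight_gb + kv_gb:.1f} GB for weights + KV cache, before runtime overhead")

The estimate lands in the same ballpark as the quoted 47.3 GB; the exact number depends on how much cache the serving stack actually allocates.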

Public endpoint

Use our pre-built public endpoints to test inference and explore Qwen3-32B capabilities.
Model Name | Context | Type | GPU | TPS | Status | Link
Qwen/QwQ-32B-AWQ | 40,960 | Public | 2×RTX4090 | 40.00 | AVAILABLE | try

API access to Qwen3-32B endpoints

cURL:

curl https://chat.immers.cloud/v1/endpoints/Qwen-3-32b/generate/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer USER_API_KEY" \
  -d '{"model": "Qwen-3-32b", "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say this is a test"}
      ], "temperature": 0, "max_tokens": 150}'
PowerShell:

$response = Invoke-WebRequest https://chat.immers.cloud/v1/endpoints/Qwen-3-32b/generate/chat/completions `
  -Method POST `
  -Headers @{
    "Authorization" = "Bearer USER_API_KEY"
    "Content-Type"  = "application/json"
  } `
  -Body (@{
    model    = "Qwen-3-32b"
    messages = @(
      @{ role = "system"; content = "You are a helpful assistant." },
      @{ role = "user"; content = "Say this is a test" }
    )
  } | ConvertTo-Json)
($response.Content | ConvertFrom-Json).choices[0].message.content
Python (OpenAI SDK):

# pip install --upgrade openai
from openai import OpenAI

client = OpenAI(
    api_key="USER_API_KEY",
    base_url="https://chat.immers.cloud/v1/endpoints/Qwen-3-32b/generate/",
)

chat_response = client.chat.completions.create(
    model="Qwen-3-32b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say this is a test"},
    ],
)
print(chat_response.choices[0].message.content)
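
Since the endpoint exposes an OpenAI-compatible API, token streaming should work in the standard way via stream=True. A brief sketch, assuming the endpoint supports server-sent events:

# Streaming variant: print tokens as they arrive instead of waiting for the full reply.
stream = client.chat.completions.create(
    model="Qwen-3-32b",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()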

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the sketch after this list).
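
For the custom-weights scenario, one common setup on a dedicated instance is serving the base model with a LoRA adapter through vLLM. A minimal sketch, where the adapter name and path are hypothetical placeholders and parallelism should match your GPU configuration:

# pip install vllm
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Base weights from the Hub plus a local fine-tuned LoRA adapter (hypothetical path).
llm = LLM(
    model="Qwen/Qwen3-32B",
    enable_lora=True,
    tensor_parallel_size=2,  # split across GPUs; set to match your instance
)

out = llm.generate(
    ["Say this is a test"],
    SamplingParams(temperature=0, max_tokens=150),
    lora_request=LoRARequest("my-adapter", 1, "/path/to/lora-adapter"),
)
print(out[0].outputs[0].text)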

Recommended configurations for hosting Qwen3-32B

Prices:

Name | vCPU | RAM, GB | Disk, GB | GPUs | Price, hour | Link
teslaa10-3.16.96.160 | 16 | 96 | 160 | 3 | $1.34 | Launch
teslat4-4.16.64.160 | 16 | 64 | 160 | 4 | $1.48 | Launch
rtx3090-3.16.96.160 | 16 | 96 | 160 | 3 | $2.45 | Launch
teslaa100-1.16.64.160 | 16 | 64 | 160 | 1 | $2.58 | Launch
rtx5090-2.16.64.160 | 16 | 64 | 160 | 2 | $2.93 | Launch
rtx4090-3.16.96.160 | 16 | 96 | 160 | 3 | $3.23 | Launch
teslah100-1.16.64.160 | 16 | 64 | 160 | 1 | $5.11 | Launch
Prices:

Name | vCPU | RAM, GB | Disk, GB | GPUs | Price, hour | Link
teslaa10-3.16.96.160 | 16 | 96 | 160 | 3 | $1.34 | Launch
rtx3090-3.16.96.160 | 16 | 96 | 160 | 3 | $2.45 | Launch
teslaa100-1.16.128.160 | 16 | 128 | 160 | 1 | $2.71 | Launch
rtx4090-3.16.96.160 | 16 | 96 | 160 | 3 | $3.23 | Launch
rtx5090-3.16.96.160 | 16 | 96 | 160 | 3 | $4.34 | Launch
teslah100-1.16.128.160 | 16 | 128 | 160 | 1 | $5.23 | Launch
Prices:

Name | vCPU | RAM, GB | Disk, GB | GPUs | Price, hour | Link
teslaa100-2.24.256.160 | 24 | 256 | 160 | 2 | $5.35 | Launch
rtx5090-4.16.128.160 | 16 | 128 | 160 | 4 | $5.74 | Launch
teslah100-2.24.256.160 | 24 | 256 | 160 | 2 | $10.40 | Launch
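
To compare these configurations on a monthly basis, a quick helper; the 730 hours/month average is an assumption, and long-term monthly billing may be priced differently:

# Approximate monthly cost at the hourly rates listed above.
HOURS_PER_MONTH = 730  # assumed average; check actual billing terms

rates = {
    "teslaa10-3.16.96.160": 1.34,
    "teslaa100-1.16.64.160": 2.58,
    "teslah100-1.16.64.160": 5.11,
}
for name, hourly in rates.items():
    print(f"{name}: ${hourly * HOURS_PER_MONTH:,.0f}/month")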

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.