gpt-oss-20b

reasoning

The GPT-OSS-20B model represents a remarkable breakthrough in compact language modeling, demonstrating that size does not always dictate performance. With 20.9 billion total parameters and only 3.6 billion activated per token, it achieves exceptional efficiency, running on devices with as little as 16GB of memory—enabled by innovative native MXFP4 quantization. The architecture consists of 24 layers, each equipped with 32 experts, of which only the top-4 are activated per token, ensuring an optimal balance between computational performance and hardware accessibility. The model inherits the same key architectural optimizations as its larger counterpart: alternating attention patterns (full and windowed), Grouped Query Attention (GQA), rotary position embeddings, and learned attention biases, all implemented in a significantly more compact form factor.
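
To make the routing concrete, the sketch below is a minimal, illustrative top-4 mixture-of-experts layer in PyTorch. It is not the actual GPT-OSS implementation: the hidden size, the expert MLP shape, and the renormalization of routing weights are simplifying assumptions; only the 32-expert / top-4 structure comes from the description above.

# Illustrative top-4 expert routing for a MoE layer (NOT the GPT-OSS source code).
# Hidden size and expert MLP internals are placeholders for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top4MoELayer(nn.Module):
    def __init__(self, hidden=2880, num_experts=32, top_k=4):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(hidden, num_experts)   # per-token routing scores
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(), nn.Linear(hidden, hidden))
            for _ in range(num_experts)
        ])

    def forward(self, x):                               # x: [tokens, hidden]
        scores = self.router(x)                         # [tokens, num_experts]
        top_vals, top_idx = scores.topk(self.top_k, dim=-1)   # keep only 4 experts per token
        weights = F.softmax(top_vals, dim=-1)           # renormalize over the selected 4
        out = torch.zeros_like(x)
        for slot in range(self.top_k):                  # only 4 of 32 experts run per token
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(8, 2880)
print(Top4MoELayer()(tokens).shape)   # torch.Size([8, 2880])

Because only the selected experts execute for each token, the compute per token scales with the ~3.6B activated parameters rather than the full 20.9B.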

The technical sophistication of GPT-OSS-20B is evident in its ability to generate extremely long reasoning chains—averaging over 20,000 CoT (Chain-of-Thought) tokens per problem on the AIME benchmark—enabling it to compete with significantly larger models. On high-level mathematical reasoning tasks such as AIME 2025, it achieves a remarkable accuracy of 98.7%, surpassing OpenAI's o3-mini (86.5%). In programming tasks, the model reaches an Elo rating of 2516 on Codeforces and 60.7% on SWE-Bench Verified. On HealthBench, it scores 42.5%, outperforming both OpenAI o1 (41.8%) and o3-mini (37.8%), indicating strong potential for medical research and clinical applications.

The practical value of GPT-OSS-20B lies in its combination of versatility and accessibility under the permissive Apache 2.0 license. Trained with the same CoT and reinforcement learning (RL) techniques as OpenAI’s o3 series, it supports a full suite of agent capabilities—including enterprise-grade tool use (web search, Python script execution in sandboxed environments, and arbitrary developer-defined function calls). Remarkably, thanks to native MXFP4 quantization, the entire model fits within just 12.8 GiB of GPU memory. This makes GPT-OSS-20B ideal for local deployment, rapid prototyping, and resource-constrained environments where a fine balance between advanced AI capabilities and hardware limitations is required.
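
As an illustration of the function-calling interface, the snippet below registers a single hypothetical get_weather tool through the standard tools parameter of the OpenAI-compatible chat completions API. The tool name and schema are invented for this example, and whether a particular endpoint executes tools server-side or simply returns the structured call depends on the deployment.

from openai import OpenAI

client = OpenAI(
    api_key="USER_API_KEY",
    base_url="https://chat.immers.cloud/v1/endpoints/gpt-oss-20b/generate/",
)

# A hypothetical tool definition; the model decides whether to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[{"role": "user", "content": "What is the weather in Berlin?"}],
    tools=tools,
)

# If the model chose to call the tool, the call arrives as structured JSON arguments.
message = response.choices[0].message
if message.tool_calls:
    print(message.tool_calls[0].function.name, message.tool_calls[0].function.arguments)
else:
    print(message.content)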


Announce Date: 05.08.2025
Parameters: 20.91B
Experts: 32
Activated: 3.61B
Context: 131K
Attention Type: Sliding Window Attention
VRAM requirements: 12.9 GB with 4-bit quantization
Developer: OpenAI
Transformers Version: 4.55.0.dev0
License: Apache 2.0
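
A rough back-of-envelope check of the quoted memory footprint: assuming roughly 19.1B of the 20.9B parameters sit in the MXFP4-quantized MoE experts (about 4.25 bits per weight once block scales are included) and the remaining ~1.8B attention/embedding parameters stay in bf16. These split and bits-per-weight figures are estimates, not official numbers, and KV cache plus runtime buffers come on top of the weights.

# Back-of-envelope estimate of the quantized weight footprint (assumed split).
GIB = 2**30

expert_params = 19.1e9            # MoE expert weights, stored in MXFP4 (assumption)
other_params  = 1.8e9             # attention, embeddings, router, kept in bf16 (assumption)

expert_bytes = expert_params * 4.25 / 8   # ~4.25 bits per weight incl. block scales
other_bytes  = other_params * 2           # 2 bytes per bf16 weight

print(f"experts: {expert_bytes / GIB:.2f} GiB")                   # ~9.4 GiB
print(f"other:   {other_bytes / GIB:.2f} GiB")                    # ~3.4 GiB
print(f"total:   {(expert_bytes + other_bytes) / GIB:.2f} GiB")   # ~12.8 GiB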

Public endpoint

Use our pre-built public endpoints to test inference and explore gpt-oss-20b capabilities.
Model Name         | Context | Type   | GPU     | TPS   | Status
openai/gpt-oss-20b | 131,072 | Public | RTX3090 | 61.74 | AVAILABLE

API access to gpt-oss-20b endpoints

curl https://chat.immers.cloud/v1/endpoints/gpt-oss-20b/generate/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer USER_API_KEY" \
  -d '{"model": "gpt-oss-20b", "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say this is a test"}
      ], "temperature": 0, "max_tokens": 150}'

$response = Invoke-WebRequest https://chat.immers.cloud/v1/endpoints/gpt-oss-20b/generate/chat/completions `
  -Method POST `
  -Headers @{
    "Authorization" = "Bearer USER_API_KEY"
    "Content-Type"  = "application/json"
  } `
  -Body (@{
    model    = "gpt-oss-20b"
    messages = @(
      @{ role = "system"; content = "You are a helpful assistant." },
      @{ role = "user"; content = "Say this is a test" }
    )
  } | ConvertTo-Json)
($response.Content | ConvertFrom-Json).choices[0].message.content

# pip install openai --upgrade

from openai import OpenAI

client = OpenAI(
    api_key="USER_API_KEY",
    base_url="https://chat.immers.cloud/v1/endpoints/gpt-oss-20b/generate/",
)

chat_response = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say this is a test"},
    ],
)
print(chat_response.choices[0].message.content)
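
For interactive use, the same client can also stream tokens as they are generated, assuming the endpoint exposes the standard streaming mode of the chat completions API. The snippet reuses the client object created above.

# Stream the reply token by token instead of waiting for the full completion.
stream = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()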

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters (see the sketch below).
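
For the custom-weights scenario, one straightforward option on a rented instance is to load the published checkpoint, or your own fine-tuned variant, with Hugging Face Transformers. The snippet below is a minimal sketch using the generic text-generation pipeline; it assumes the Transformers version listed above, an installed accelerate package for device placement, and a GPU configuration with enough memory from the tables that follow.

# Minimal local inference sketch with Transformers; swap the model id for a
# local path or your own fine-tuned repository if needed.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",   # or a path to custom weights
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain what mixture-of-experts means in one sentence."},
]

outputs = pipe(messages, max_new_tokens=128)
print(outputs[0]["generated_text"][-1]["content"])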

Recommended configurations for hosting gpt-oss-20b

Prices:
Name                  | vCPU | RAM, MB | Disk, GB | GPUs | Price, hour
teslat4-1.16.16.160   | 16   | 16384   | 160      | 1    | $0.46
teslaa10-1.16.32.160  | 16   | 32768   | 160      | 1    | $0.53
teslaa2-2.16.32.160   | 16   | 32768   | 160      | 2    | $0.57
rtx2080ti-2.12.64.160 | 12   | 65536   | 160      | 2    | $0.69
rtx3090-1.16.24.160   | 16   | 24576   | 160      | 1    | $0.88
rtx3080-2.16.32.160   | 16   | 32768   | 160      | 2    | $0.97
rtx4090-1.16.32.160   | 16   | 32768   | 160      | 1    | $1.15
teslav100-1.12.64.160 | 12   | 65536   | 160      | 1    | $1.20
rtx5090-1.16.64.160   | 16   | 65536   | 160      | 1    | $1.59
teslaa100-1.16.64.160 | 16   | 65536   | 160      | 1    | $2.58
teslah100-1.16.64.160 | 16   | 65536   | 160      | 1    | $5.11
Prices:
Name                  | vCPU | RAM, MB | Disk, GB | GPUs | Price, hour
teslaa2-2.16.32.160   | 16   | 32768   | 160      | 2    | $0.57
teslat4-2.16.32.160   | 16   | 32768   | 160      | 2    | $0.80
teslaa10-2.16.64.160  | 16   | 65536   | 160      | 2    | $0.93
rtx2080ti-3.16.64.160 | 16   | 65536   | 160      | 3    | $0.95
teslav100-1.12.64.160 | 12   | 65536   | 160      | 1    | $1.20
rtx3080-3.16.64.160   | 16   | 65536   | 160      | 3    | $1.43
rtx5090-1.16.64.160   | 16   | 65536   | 160      | 1    | $1.59
rtx3090-2.16.64.160   | 16   | 65536   | 160      | 2    | $1.67
rtx4090-2.16.64.160   | 16   | 65536   | 160      | 2    | $2.19
teslaa100-1.16.64.160 | 16   | 65536   | 160      | 1    | $2.58
teslah100-1.16.64.160 | 16   | 65536   | 160      | 1    | $5.11
Prices:
Name                  | vCPU | RAM, MB | Disk, GB | GPUs | Price, hour
teslaa10-2.16.64.160  | 16   | 65536   | 160      | 2    | $0.93
teslat4-4.16.64.160   | 16   | 65536   | 160      | 4    | $1.48
rtx3090-2.16.64.160   | 16   | 65536   | 160      | 2    | $1.67
rtx4090-2.16.64.160   | 16   | 65536   | 160      | 2    | $2.19
teslav100-2.16.64.240 | 16   | 65536   | 240      | 2    | $2.22
teslaa100-1.16.64.160 | 16   | 65536   | 160      | 1    | $2.58
rtx5090-2.16.64.160   | 16   | 65536   | 160      | 2    | $2.93
teslah100-1.16.64.160 | 16   | 65536   | 160      | 1    | $5.11

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.