The GPT-OSS-20B model is a notable advance in compact language modeling, demonstrating that size does not always dictate performance. With 20.9 billion total parameters and only 3.6 billion activated per token, it achieves exceptional efficiency, running on devices with as little as 16 GB of memory, enabled by native MXFP4 quantization. The architecture consists of 24 layers, each equipped with 32 experts, of which only the top 4 are activated per token, balancing computational performance against hardware accessibility. The model inherits the key architectural optimizations of its larger counterpart: alternating attention patterns (full and windowed), Grouped Query Attention (GQA), rotary position embeddings, and learned attention biases, all implemented in a significantly more compact form factor.
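To make the routing scheme concrete, here is a minimal, illustrative sketch of top-4-of-32 expert routing in PyTorch. The layer sizes, module names, and the softmax-over-selected-experts weighting are our own assumptions for illustration, not the actual GPT-OSS implementation:

import torch
import torch.nn.functional as F

# Illustrative sketch of top-4-of-32 mixture-of-experts routing
# (a simplification; shapes and weighting are assumptions, not GPT-OSS code).
NUM_EXPERTS, TOP_K, D_MODEL = 32, 4, 1024

router = torch.nn.Linear(D_MODEL, NUM_EXPERTS)
experts = torch.nn.ModuleList(
    torch.nn.Sequential(
        torch.nn.Linear(D_MODEL, 4 * D_MODEL),
        torch.nn.GELU(),
        torch.nn.Linear(4 * D_MODEL, D_MODEL),
    )
    for _ in range(NUM_EXPERTS)
)

def moe_layer(x):
    # x: (tokens, d_model). The router scores every expert for every token.
    logits = router(x)                                # (tokens, 32)
    weights, idx = torch.topk(logits, TOP_K, dim=-1)  # keep the 4 best experts
    weights = F.softmax(weights, dim=-1)              # normalize over the chosen 4
    out = torch.zeros_like(x)
    for e in range(NUM_EXPERTS):                      # only selected experts run
        for k in range(TOP_K):
            mask = idx[:, k] == e
            if mask.any():
                out[mask] += weights[mask, k:k+1] * experts[e](x[mask])
    return out

x = torch.randn(8, D_MODEL)   # 8 tokens
print(moe_layer(x).shape)     # torch.Size([8, 1024])

Because only 4 of the 32 expert MLPs run for any given token, the active parameter count (3.6B) stays a small fraction of the total (20.9B), which is what keeps per-token compute low.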
The technical sophistication of GPT-OSS-20B shows in its ability to generate extremely long reasoning chains, averaging over 20,000 chain-of-thought (CoT) tokens per problem on the AIME benchmark, which lets it compete with significantly larger models. On competition-level mathematical reasoning such as AIME 2025, it achieves a remarkable 98.7% accuracy, surpassing OpenAI's o3-mini (86.5%). In programming tasks, the model reaches an Elo rating of 2516 on Codeforces and 60.7% on SWE-Bench Verified. On HealthBench, it scores 42.5%, outperforming both OpenAI o1 (41.8%) and o3-mini (37.8%), indicating strong potential for medical research and clinical applications.
The practical value of GPT-OSS-20B lies in its combination of versatility and accessibility under the permissive Apache 2.0 license. Trained with the same CoT and reinforcement learning (RL) techniques as OpenAI's o3 series, it supports a full suite of agent capabilities, including enterprise-grade tool use: web search, Python script execution in sandboxed environments, and arbitrary developer-defined function calls. Remarkably, thanks to native MXFP4 quantization, the entire model fits within just 12.8 GiB of GPU memory. This makes GPT-OSS-20B well suited to local deployment, rapid prototyping, and resource-constrained environments that demand a balance between advanced AI capabilities and hardware limits.
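The 12.8 GiB figure is roughly what MXFP4's bit budget predicts. A back-of-envelope check (our own arithmetic, not an official breakdown; the assumption about which tensors stay at higher precision is ours):

# Back-of-envelope check of the 12.8 GiB figure (our arithmetic, not an
# official breakdown). MXFP4 stores 4-bit values plus one shared 8-bit
# scale per 32-value block, i.e. about 4.25 bits per quantized weight.
total_params   = 20.9e9
bits_per_param = 4 + 8 / 32                               # ~4.25 bits with block scaling
quantized_gib  = total_params * bits_per_param / 8 / 2**30
print(f"All weights at MXFP4: ~{quantized_gib:.1f} GiB")  # ~10.3 GiB
# The gap up to 12.8 GiB is plausibly embeddings, attention weights, and
# other tensors kept at higher precision (an assumption on our part).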
Model Name | Context | Type | GPU | TPS (tokens/sec) | Status | Link |
---|---|---|---|---|---|---|
openai/gpt-oss-20b | 131,072 | Public | RTX3090 | 61.74 | AVAILABLE | try |
curl https://chat.immers.cloud/v1/endpoints/gpt-oss-20b/generate/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer USER_API_KEY" \
-d '{"model": "gpt-oss-20b", "messages": [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Say this is a test"}
], "temperature": 0, "max_tokens": 150}'
$response = Invoke-WebRequest https://chat.immers.cloud/v1/endpoints/gpt-oss-20b/generate/chat/completions `
-Method POST `
-Headers @{
"Authorization" = "Bearer USER_API_KEY"
"Content-Type" = "application/json"
} `
-Body (@{
model = "gpt-oss-20b"
messages = @(
@{ role = "system"; content = "You are a helpful assistant." },
@{ role = "user"; content = "Say this is a test" }
)
} | ConvertTo-Json -Depth 5)  # default -Depth of 2 would flatten the nested message objects
($response.Content | ConvertFrom-Json).choices[0].message.content
# pip install openai --upgrade
from openai import OpenAI
# Point the OpenAI-compatible client at the immers.cloud endpoint
client = OpenAI(
    api_key="USER_API_KEY",
    base_url="https://chat.immers.cloud/v1/endpoints/gpt-oss-20b/generate/",
)
chat_response = client.chat.completions.create(
model="gpt-oss-20b",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Say this is a test"},
]
)
print(chat_response.choices[0].message.content)
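Since the model can produce very long chain-of-thought outputs, waiting for the full completion can be slow. A minimal streaming sketch with the same client, assuming the endpoint passes through the OpenAI-compatible stream option:

# Streaming variant (assumes the immers.cloud endpoint supports the
# OpenAI-compatible stream=True option).
stream = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # the final chunk may carry no content
        print(delta, end="", flush=True)
print()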
Rent your own physically dedicated instance with hourly or long-term monthly billing. Recommended configurations for deploying GPT-OSS-20B on a private instance are listed below:
vCPU | RAM, MB | Disk, GB | GPUs | Price/hr |
---|---|---|---|---|
16 | 16384 | 160 | 1 | $0.46 |
16 | 32768 | 160 | 1 | $0.53 |
16 | 32768 | 160 | 2 | $0.57 |
12 | 65536 | 160 | 2 | $0.69 |
16 | 24576 | 160 | 1 | $0.88 |
16 | 32768 | 160 | 2 | $0.97 |
16 | 32768 | 160 | 1 | $1.15 |
12 | 65536 | 160 | 1 | $1.20 |
16 | 65536 | 160 | 1 | $1.59 |
16 | 65536 | 160 | 1 | $2.58 |
16 | 65536 | 160 | 1 | $5.11 |
vCPU | RAM, MB | Disk, GB | GPUs | Price/hr |
---|---|---|---|---|
16 | 32768 | 160 | 2 | $0.57 |
16 | 32768 | 160 | 2 | $0.80 |
16 | 65536 | 160 | 2 | $0.93 |
16 | 65536 | 160 | 3 | $0.95 |
12 | 65536 | 160 | 1 | $1.20 |
16 | 65536 | 160 | 3 | $1.43 |
16 | 65536 | 160 | 1 | $1.59 |
16 | 65536 | 160 | 2 | $1.67 |
16 | 65536 | 160 | 2 | $2.19 |
16 | 65536 | 160 | 1 | $2.58 |
16 | 65536 | 160 | 1 | $5.11 |
vCPU | RAM, MB | Disk, GB | GPUs | Price/hr |
---|---|---|---|---|
16 | 65536 | 160 | 2 | $0.93 |
16 | 65536 | 160 | 4 | $1.48 |
16 | 65536 | 160 | 2 | $1.67 |
16 | 65536 | 160 | 2 | $2.19 |
16 | 65536 | 240 | 2 | $2.22 |
16 | 65536 | 160 | 1 | $2.58 |
16 | 65536 | 160 | 2 | $2.93 |
16 | 65536 | 160 | 1 | $5.11 |
Contact our dedicated neural-network support team at nn@immers.cloud, or send your request to the sales department at sale@immers.cloud.