gpt-oss-120b

reasoning

On August 5, 2025, OpenAI released the gpt-oss series, its first open-weight models since the legendary GPT-2, marking a bold entry into the competitive open-source LLM market. The flagship gpt-oss-120b has 116.8 billion total parameters in a Mixture-of-Experts (MoE) architecture that activates only 5.1 billion parameters per token. Thanks to native MXFP4 quantization, the model runs efficiently on a single 80 GB GPU.
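A back-of-envelope calculation shows why MXFP4 fits the model on one 80 GB GPU. The assumption here (not stated in the announcement) is the common MXFP4 layout of 32 four-bit values sharing one 8-bit scale, i.e. roughly 4.25 effective bits per parameter; in practice only part of the network is quantized to MXFP4, so this is only a rough estimate:

```python
# Rough VRAM estimate for gpt-oss-120b weights under MXFP4.
# Assumption: 32 x 4-bit values share one 8-bit scale -> ~4.25 bits/param.
total_params = 116.8e9
bits_per_param = 4 + 8 / 32          # 4.25 effective bits
weight_bytes = total_params * bits_per_param / 8
print(f"~{weight_bytes / 1024**3:.1f} GiB of weights")
```

The result lands near 58 GiB (about 62 GB), the same order as the 59.3 GB figure in the specs below, leaving headroom on an 80 GB card for the KV cache and activations.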

The architecture comprises 36 hidden layers with 128 experts per MoE block, of which only the top 4 are activated per token. It alternates full-attention and sliding-window attention layers (window width 128 tokens) and uses Grouped Query Attention (GQA) with 64 query heads and 8 key-value heads. Each attention head includes a learned bias in the softmax denominator, similar to off-by-one attention and attention sinks, which lets the model downweight or effectively ignore specific tokens. Together these choices keep attention computation and VRAM usage efficient while supporting contexts up to 131,072 tokens via YaRN extension.
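The "bias in the softmax denominator" can be sketched in a few lines. This is a simplified single-query, single-head illustration (shapes and naming are hypothetical, not the actual gpt-oss kernels): an extra learned sink logit enters the denominator only, so the weights over real tokens can sum to less than one, letting the head pay attention to "nothing":

```python
import numpy as np

def sink_softmax(scores, sink_logit):
    # scores: (num_tokens,) attention logits for one query and one head;
    # sink_logit: learned scalar that joins the denominator only.
    m = max(scores.max(), sink_logit)            # stabilize the exponentials
    exp = np.exp(scores - m)
    denom = exp.sum() + np.exp(sink_logit - m)   # sink absorbs probability mass
    return exp / denom                           # weights sum to < 1

scores = np.array([2.0, 1.0, 0.5])
weights = sink_softmax(scores, sink_logit=3.0)
print(weights, weights.sum())
```

With a very negative sink logit this reduces to an ordinary softmax; a large sink logit lets the head nearly ignore all tokens.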

gpt-oss-120b was trained using the new Harmony chat format, which introduces a role hierarchy (System > Developer > User > Assistant > Tool) to resolve instruction conflicts, and a channel system to route generated output: analysis for reasoning chains (CoT), commentary for tool calls, and final for user-facing responses. This enables fine-grained control over generation, such as interleaving function calls within reasoning or automatically removing prior assistant reasoning traces from the context. Another key innovation is Variable Effort Reasoning, a three-tier system (low, medium, high) that lets users dynamically trade speed for accuracy based on task complexity.
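A simplified rendering of the Harmony layout may make the channel idea concrete. This is an illustrative sketch only: in practice the special tokens are emitted by the tokenizer's chat template or the openai-harmony library, and the exact header layout may differ:

```python
# Simplified Harmony-style rendering (illustrative; real prompts come
# from the tokenizer chat template / openai-harmony, not hand-built strings).
def render(role, content, channel=None):
    header = role if channel is None else f"{role}<|channel|>{channel}"
    return f"<|start|>{header}<|message|>{content}<|end|>"

conversation = "\n".join([
    render("system", "Reasoning: high"),                      # effort knob
    render("user", "What is 2 + 2?"),
    render("assistant", "Simple arithmetic.", channel="analysis"),  # CoT
    render("assistant", "2 + 2 = 4.", channel="final"),       # user-facing
])
print(conversation)
```

Because reasoning and the final answer live in separate channels, a serving stack can strip prior analysis messages from the context while keeping the final replies.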

In practical applications, gpt-oss-120b achieves impressive results, surpassing OpenAI's o3-mini across many benchmarks and approaching o4-mini on key tasks. On the AIME 2024 and 2025 math competitions it scores 96.6% and 97.9%, respectively. Its agentic capabilities shine through strong tool use, including web search, Python code execution in Jupyter environments, and calling arbitrary user-defined functions. On Codeforces the model reaches a rating of 2622; on SWE-Bench Verified it achieves 62.4% accuracy, 13 percentage points above o3-mini; and on Tau-Bench Retail it demonstrates 67.8% accuracy on function-calling tasks.

The model is released under the open Apache 2.0 license (with a minor complementary usage policy) and is integrated with leading platforms and GPU vendors, enabling rapid deployment and seamless integration into research projects or commercial products.


Announce Date: 05.08.2025
Parameters: 117B
Experts: 128
Activated: 5.1B
Context: 131K
Attention Type: Alternating Full / Sliding Window Attention
VRAM requirements: 59.3 GB with 4-bit quantization
Developer: OpenAI
Transformers Version: 4.55.0.dev0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints to test inference and explore gpt-oss-120b capabilities.
Model Name Context Type GPU TPS Status Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
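For a private instance, one common way to self-host the model is an OpenAI-compatible server such as vLLM. The command below is a deployment sketch, not a verified recipe: the model identifier and flags follow vLLM's documented gpt-oss support, and the GPU count should match the configuration you rent:

```shell
# Deployment sketch (assumes vLLM's gpt-oss support; adjust to your GPUs).
pip install vllm
vllm serve openai/gpt-oss-120b \
    --tensor-parallel-size 2    # e.g. two 80 GB A100/H100 cards
# Exposes an OpenAI-compatible API on http://localhost:8000/v1
```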

Recommended configurations for hosting gpt-oss-120b

Prices:
Name                     vCPU   RAM, MB   Disk, GB   GPU   Price/hour
teslaa10-3.16.96.160       16     98304        160     3        $1.34
rtx3090-3.16.96.160        16     98304        160     3        $2.45
teslaa100-1.16.128.160     16    131072        160     1        $2.71
rtx4090-3.16.96.160        16     98304        160     3        $3.23
rtx5090-3.16.96.160        16     98304        160     3        $4.34
teslah100-1.16.128.160     16    131072        160     1        $5.23

Prices:
Name                     vCPU   RAM, MB   Disk, GB   GPU   Price/hour
teslaa100-2.24.256.240     24    262144        240     2        $5.36
rtx5090-4.16.128.320       16    131072        320     4        $5.76
teslah100-2.24.256.240     24    262144        240     2       $10.41

Prices:
Name                     vCPU   RAM, MB   Disk, GB   GPU   Price/hour
teslaa100-4.44.512.320     44    524288        320     4       $10.68
teslah100-4.44.512.320     44    524288        320     4       $20.77

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.