On August 5, 2025, OpenAI launched the gpt-oss series, its first open-weight models since the legendary GPT-2, marking a bold entry into the competitive open-source LLM market. The flagship GPT-OSS-120B packs 116.8 billion total parameters into a Mixture-of-Experts (MoE) architecture that activates only 5.1 billion parameters per token. Thanks to native MXFP4 quantization, the model runs efficiently on a single 80 GB GPU.
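To see what "activates only 5.1 billion parameters per token" means mechanically, here is a generic top-k MoE router sketch in PyTorch. It illustrates the routing pattern (top 4 of 128 experts, softmax over the selected experts), not OpenAI's exact implementation; the class name `TopKRouter` is ours:

```python
import torch
import torch.nn.functional as F

class TopKRouter(torch.nn.Module):
    """Generic MoE gate: picks top-k experts per token (k=4 of 128 in gpt-oss-120b)."""

    def __init__(self, d_model: int, n_experts: int = 128, k: int = 4):
        super().__init__()
        self.gate = torch.nn.Linear(d_model, n_experts, bias=False)
        self.k = k

    def forward(self, x: torch.Tensor):
        # x: (tokens, d_model) -> per-expert scores: (tokens, n_experts)
        logits = self.gate(x)
        topk_logits, topk_idx = logits.topk(self.k, dim=-1)
        # Normalize only over the selected experts; their outputs are then
        # weighted and summed, so only ~4/128 of the expert parameters
        # (plus shared layers) do work for any given token.
        weights = F.softmax(topk_logits, dim=-1)
        return weights, topk_idx
```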
The architecture comprises 36 hidden layers with 128 experts, of which only the top 4 are activated per token. It alternates full-attention and windowed-attention layers (window width of 128 tokens) and uses Grouped Query Attention (GQA) with 8 key-value heads. Each of the 64 attention heads includes a learned bias in the softmax denominator, similar to off-by-one attention and attention sinks, which lets the model downweight or nearly ignore specific tokens. Together these choices keep attention compute and VRAM usage in check and support contexts up to 131,072 tokens via YaRN extension.
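The learned bias in the softmax denominator can be sketched in a few lines: an extra per-head logit joins the softmax competition, so a head can dump probability mass into this "sink" and assign near-zero total weight to the real tokens. A minimal PyTorch sketch of the idea (function name and shapes are illustrative, not gpt-oss internals):

```python
import torch
import torch.nn.functional as F

def sink_softmax(scores: torch.Tensor, sink_logit: torch.Tensor) -> torch.Tensor:
    """Softmax with a learned per-head 'sink' logit appended to each row.

    scores:     (heads, q_len, k_len) raw attention logits
    sink_logit: (heads,) learned bias; its exp() joins the denominator,
                so a head can route attention mass to 'nothing'.
    """
    heads, q_len, _ = scores.shape
    sink = sink_logit.view(heads, 1, 1).expand(heads, q_len, 1)
    probs = F.softmax(torch.cat([scores, sink], dim=-1), dim=-1)
    # Drop the sink column: each row now sums to <= 1, unlike plain softmax.
    return probs[..., :-1]
```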
GPT-OSS-120B was trained on the new Harmony chat format, which introduces a role hierarchy (System > Developer > User > Assistant > Tool) to resolve instruction conflicts and a channel system that routes generated output: analysis for reasoning chains (CoT), commentary for tool calls, and final for user-facing responses. This enables fine-grained control over generation, such as interleaving function calls within reasoning or automatically stripping prior assistant reasoning traces from the context. Another key innovation is Variable Effort Reasoning, a three-tier system (low, medium, high) that lets users trade speed against accuracy based on task complexity.
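For concreteness, here is a hand-written illustration of the Harmony token layout, showing roles, channels, and the reasoning-effort line in the system message. The special tokens and channel names follow OpenAI's published Harmony spec, but this string is our simplified example; real prompts should be rendered with the official harmony tooling rather than assembled by hand:

```python
# Illustrative Harmony-format conversation (simplified). The reasoning effort
# ("Reasoning: high") is declared in the system message; the assistant writes
# its chain of thought to the analysis channel and the answer to final.
HARMONY_EXAMPLE = """\
<|start|>system<|message|>You are a helpful assistant.
Reasoning: high<|end|>\
<|start|>user<|message|>What is 17 * 24?<|end|>\
<|start|>assistant<|channel|>analysis<|message|>17*24 = 17*20 + 17*4 = 340 + 68 = 408.<|end|>\
<|start|>assistant<|channel|>final<|message|>17 * 24 = 408.<|return|>"""
```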
In practical applications, GPT-OSS-120B achieves impressive results, surpassing OpenAI's o3-mini across many benchmarks and approaching o4-mini performance on key tasks. On the AIME 2024 and 2025 math olympiads it scores 96.6% and 97.9%, respectively. Its agentic capabilities show in strong tool use: web search, Python code execution in Jupyter environments, and calling arbitrary user-defined functions. On Codeforces the model reaches a rating of 2622; on SWE-Bench Verified it achieves 62.4% accuracy, 13 percentage points above o3-mini; and on Tau-Bench Retail it scores 67.8% on function-calling tasks.
The model is released under the open Apache 2.0 license (with minor additional terms) and is integrated with leading platforms and GPU vendors, enabling rapid deployment and seamless integration into research projects or commercial products.
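As a starting point for self-hosting, the checkpoint published as openai/gpt-oss-120b on the Hugging Face Hub loads through the standard transformers APIs. A minimal sketch, assuming a recent transformers release with gpt-oss/MXFP4 support and a single 80 GB GPU:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-120b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # keep the native (quantized) weight format where supported
    device_map="auto",   # place the model on the available GPU(s)
)

messages = [{"role": "user", "content": "Explain MXFP4 in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```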
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the configurations listed below:
vCPU | RAM, MB | Disk, GB | GPUs | Price per hour
---|---|---|---|---
16 | 98304 | 160 | 3 | $1.34
16 | 98304 | 160 | 3 | $2.45
16 | 131072 | 160 | 1 | $2.71
16 | 98304 | 160 | 3 | $3.23
16 | 98304 | 160 | 3 | $4.34
16 | 131072 | 160 | 1 | $5.23
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.