GPT-OSS-120B is the flagship open-weight reasoning model from OpenAI, carrying forward the scientific advances behind the renowned ChatGPT. It uses a Mixture-of-Experts (MoE) architecture with 116.8 billion total parameters, of which only 5.1 billion are activated per token, along with numerous optimizations that balance performance against resource consumption, enabling the model to run on a single 80 GB GPU. GPT-OSS-120B supports three reasoning-effort levels (low, medium, and high) and, for the first time among open models, introduces an extended role hierarchy and output channels aligned with specific roles, together allowing users to precisely customize and control the model's behavior.
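The gap between total and active parameters comes from top-k expert routing: for each token, a small router scores all experts and dispatches the token to only a handful of them, so only those experts' feed-forward weights participate in the computation. The sketch below illustrates the routing step in miniature; the toy sizes (8 experts, 2 active) and logit values are illustrative assumptions, while gpt-oss-120b itself routes each token to 4 of 128 experts per MoE layer.

```python
import math

def route_topk(logits, k):
    """Pick the top-k experts by router logit and renormalize
    their softmax weights over the selected set."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    exps = {i: math.exp(logits[i]) for i in top}
    z = sum(exps.values())
    return {i: exps[i] / z for i in top}

# Toy router: 8 experts, 2 active per token
# (gpt-oss-120b uses 128 experts with 4 active).
logits = [0.1, 2.0, -1.0, 0.5, 1.5, -0.3, 0.0, 0.9]
weights = route_topk(logits, k=2)
# Only the selected experts run for this token; the token's output is the
# weighted sum of their outputs. This sparsity is why only ~5.1B of the
# 116.8B parameters are active per token.
```

Because the unselected experts are skipped entirely, per-token compute scales with the active parameter count rather than the total, which is what makes single-GPU inference feasible despite the model's overall size.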