Seed-OSS-36B is an open large language model developed by ByteDance's Seed team for tasks that demand strong reasoning, long-context processing, and flexible agent-based workflows. Built on the classic Transformer architecture, it incorporates state-of-the-art techniques: Rotary Position Embedding (RoPE) for handling extremely long sequences (a native context of up to 512K tokens), Grouped Query Attention (GQA) for faster and more memory-efficient inference, RMSNorm normalization, and SwiGLU activation for stable training and high-quality generation. The model comprises 36 billion parameters across 64 hidden layers, with 80 attention heads (and 8 KV heads) and a head dimension of 128. This configuration enables efficient performance in text generation as well as complex reasoning, programming, and agent-interaction tasks. A key feature of the Seed-OSS approach is native support for ultra-long context without additional context-extension techniques, which makes the model particularly valuable for processing large documents, files, and extended dialogues.
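For reference, the architecture figures quoted above can be summarized in the style of a Hugging Face model config. This is a minimal sketch: the key names and any value not stated in the text are illustrative assumptions, and the official config.json remains authoritative.

```python
# Sketch of the Seed-OSS-36B hyperparameters described above, written with
# Hugging Face-style key names (assumed; check the official config.json).
SEED_OSS_36B_ARCH = {
    "num_hidden_layers": 64,         # 64 Transformer blocks, ~36B parameters total
    "num_attention_heads": 80,       # query heads
    "num_key_value_heads": 8,        # Grouped Query Attention: 8 shared KV heads
    "head_dim": 128,                 # per-head dimension
    "max_position_embeddings": 524_288,  # native ~512K-token context via RoPE (assumed exact value)
    "hidden_act": "silu",            # SwiGLU feed-forward activation
}
```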
Seed-OSS-36B achieves top-tier results within its class on major global benchmarks, leading on LiveCodeBench, MMLU, AIME24/25, GPQA-D, and RULER. It also demonstrates state-of-the-art instruction following on IFEval and excels at agent-oriented tasks such as TAU1-Retail and SWE-Bench Verified.
A distinctive feature of Seed-OSS-36B-Instruct is its flexible "thinking budget" system: users can cap the number of reasoning tokens to strike the right balance between response speed and depth of the reasoning chain. This is especially important when tuning the model for diverse practical applications, from fast chatbots to deep analytical use cases in business, science, and education. Designed for international deployment, the model supports multiple languages and, importantly, is released under the open Apache-2.0 license. Technically, Seed-OSS-36B is compatible with the standard Transformers library as well as modern inference frameworks such as vLLM, ensuring seamless integration into industrial applications, cloud services, and research projects.
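As a sketch of how the thinking budget can be set in practice, the snippet below loads the instruct checkpoint with the Transformers library and passes a budget through the chat template. The repository id and the `thinking_budget` argument follow the public model card and should be verified against the official documentation.

```python
# Minimal sketch: generation with a capped thinking budget via Transformers.
# The repo id and the `thinking_budget` chat-template argument are taken from
# the public model card; verify both before relying on them.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ByteDance-Seed/Seed-OSS-36B-Instruct"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native precision
    device_map="auto",    # shard across available GPUs
)

messages = [{"role": "user", "content": "Summarize the attached contract in five bullet points."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    thinking_budget=512,  # cap on reasoning tokens (per the model card; omit for unrestricted reasoning)
).to(model.device)

output = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```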
| Model Name | Context | Type | GPU | TPS | Status | Link |
|---|---|---|---|---|---|---|
There are no public endpoints for this model yet.
Rent your own physically dedicated instance with hourly or long-term monthly billing.
We recommend deploying private instances in the following scenarios:
| Name | vCPU | RAM, MB | Disk, GB | GPUs | Price |
|---|---|---|---|---|---|
| | 16 | 131072 | 160 | 4 | $1.75 |
| | 16 | 131072 | 160 | 4 | $3.23 |
| | 16 | 131072 | 160 | 4 | $4.26 |
| | 16 | 98304 | 160 | 3 | $4.34 |
| | 24 | 262144 | 160 | 2 | $5.35 |
| | 24 | 262144 | 160 | 2 | $10.40 |
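If you deploy the model on one of these dedicated configurations, vLLM's offline Python API is a straightforward way to run it. The snippet below is a sketch under assumptions: the repository id, tensor-parallel degree, and context length are illustrative and should be sized to the instance you actually launch.

```python
# Sketch: serving Seed-OSS-36B on a rented multi-GPU instance with vLLM.
# Repo id, tensor_parallel_size, and max_model_len are illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="ByteDance-Seed/Seed-OSS-36B-Instruct",  # assumed Hugging Face repo id
    tensor_parallel_size=4,   # e.g. a 4-GPU configuration from the table above
    max_model_len=32768,      # raise toward the native 512K window only if GPU memory allows
)

params = SamplingParams(temperature=0.7, max_tokens=512)
outputs = llm.generate(["Explain grouped query attention in two sentences."], params)
print(outputs[0].outputs[0].text)
```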
Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.