Seed-OSS-36B

reasoning

Seed-OSS-36B is an open large language model developed by ByteDance's Seed team for tasks that demand strong reasoning, long-context processing, and flexible agent workflows. Built on the classic Transformer architecture, it combines several state-of-the-art techniques: Rotary Position Embedding (RoPE) for handling extremely long sequences (natively up to 512K, i.e. 524,288 tokens), Grouped Query Attention (GQA) for faster, more memory-efficient attention, RMSNorm normalization, and SwiGLU activation for high-quality training and generation. The model comprises 64 hidden layers, 36 billion parameters, 80 attention heads (with 8 KV heads), and a head dimension of 128. This configuration enables efficient performance across text generation, complex reasoning, programming, and agent interaction tasks.

A key feature of the Seed-OSS approach is native support for ultra-long context without additional context-extension techniques, which makes the model particularly valuable for processing large documents, files, and extended dialogues.
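The head-count figures above directly determine the KV-cache footprint at long context, which in turn drives the memory requirements of the server configurations listed further down. A back-of-the-envelope sketch, assuming an unquantized bf16 cache (production servers may quantize it and need less):

```python
# KV-cache sizing for Seed-OSS-36B from the configuration stated above:
# 64 layers, 8 KV heads, head dimension 128. Assumes bf16 (2-byte) entries.
LAYERS = 64
KV_HEADS = 8
HEAD_DIM = 128
BYTES_PER_VALUE = 2  # bf16

# Each token stores one key and one value vector per layer per KV head.
bytes_per_token = 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_PER_VALUE

context = 524_288  # the full 512K context
total_gib = bytes_per_token * context / 2**30

print(f"{bytes_per_token} bytes/token")        # 262144 bytes = 256 KiB
print(f"{total_gib:.0f} GiB at 512K context")  # 128 GiB
```

At 256 KiB per token, a single full-context request alone occupies 128 GiB of cache, which is why the full-context configurations below all use multi-GPU or high-memory cards.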

Seed-OSS-36B achieves top-tier results in its class on major benchmarks, including LiveCodeBench, MMLU, AIME24/25, GPQA-D, and RULER. It also demonstrates state-of-the-art instruction following on IFEval and excels in agent-oriented tasks such as TAU1-Retail and SWE-Bench Verified.

A distinctive feature of Seed-OSS-36B-Instruct is its flexible "thinking budget" system: users can cap the number of reasoning tokens to strike an optimal balance between response speed and depth of the reasoning chain. This is especially important when tuning the model for diverse practical applications, from fast chatbots to deep analytical use cases in business, science, and education.

Designed for international deployment, the model supports multiple languages and, importantly, is released under the open Apache-2.0 license. Technically, Seed-OSS-36B is compatible with the standard Transformers library as well as modern inference frameworks such as vLLM, ensuring seamless integration into industrial applications, cloud services, and research projects.
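When the model is served behind a vLLM OpenAI-compatible endpoint, the thinking budget can be carried in the request body. A minimal sketch, assuming a server that forwards `chat_template_kwargs` to the chat template and a template that accepts a `thinking_budget` argument (per the model card); the model name and prompt below are placeholders, so verify the parameter name against your server version:

```python
# Sketch of an OpenAI-compatible chat request that caps reasoning depth.
# Assumption: vLLM forwards `chat_template_kwargs` to the tokenizer's
# chat template, which honors `thinking_budget` (0 disables thinking).
import json

def build_request(prompt: str, thinking_budget: int) -> dict:
    """Build the JSON body for POST /v1/chat/completions."""
    return {
        "model": "Seed-OSS-36B-Instruct",
        "messages": [{"role": "user", "content": prompt}],
        # Larger budgets allow deeper reasoning chains at higher latency.
        "chat_template_kwargs": {"thinking_budget": thinking_budget},
        "max_tokens": 1024,
    }

body = build_request("Summarize this contract clause.", 512)
print(json.dumps(body, indent=2))
```

Setting the budget per request lets one deployment serve both fast-chat and deep-analysis traffic without reloading the model.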


Announce Date: 20.08.2025
Parameters: 37B
Context: 525K
Layers: 64
Attention Type: Full Attention
Developer: ByteDance-Seed
Transformers Version: 4.55.0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Seed-OSS-36B capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name | Context | Type | GPU | Status | Link
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable the full context window for long sequences,
  • guarantee top-tier data security in an isolated, dedicated environment,
  • serve custom weights, such as fine-tuned models or LoRA adapters.
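For the custom-weights scenario, a private instance is typically served with vLLM. A minimal launch sketch (the model path, parallel degree, and context length below are illustrative and should match the server configuration you rent):

```shell
# Serve Seed-OSS-36B with vLLM on a 2-GPU instance (illustrative values).
# --tensor-parallel-size must match the GPU count of the rented server;
# --max-model-len caps the context to what fits in GPU memory.
vllm serve ByteDance-Seed/Seed-OSS-36B-Instruct \
  --tensor-parallel-size 2 \
  --max-model-len 168000 \
  --served-model-name Seed-OSS-36B-Instruct
```

To serve a fine-tuned checkpoint instead, point the first argument at your local weights directory.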

Recommended server configurations for hosting Seed-OSS-36B

Prices:
Name | Context, tokens | Parallelism | GPUs | Price, hour | TPS
teslat4-2.16.32.160 | 14,000 | tensor | 2 | $0.54 | 2.031
teslaa2-2.16.32.160 | 14,000 | tensor | 2 | $0.57 | 2.031
rtx2080ti-3.12.24.120 | 14,000 | pipeline | 3 | $0.84 | 1.563
teslaa10-2.16.64.160 | 14,000 | tensor | 2 | $0.93 | 6.244
rtx2080ti-4.16.32.160 | 14,000 | tensor | 4 | $1.12 | 3.728
teslav100-1.12.64.160 | 14,000 | - | 1 | $1.20 | 2.763
rtxa5000-2.16.64.160.nvlink | 14,000 | tensor | 2 | $1.23 | 6.244
rtx3080-3.16.64.160 | 14,000 | pipeline | 3 | $1.43 | 0.773
rtx3090-2.16.64.160 | 14,000 | tensor | 2 | $1.56 | 6.244
rtx5090-1.16.64.160 | 14,000 | - | 1 | $1.59 | 2.763
teslaa10-4.16.64.160 | 168,000 | tensor | 4 | $1.62 | 1.452
teslaa2-6.32.128.160 | 168,000 | pipeline | 6 | $1.65 | 1.330
rtx3080-4.16.64.160 | 14,000 | tensor | 4 | $1.82 | 2.675
rtx4090-2.16.64.160 | 14,000 | tensor | 2 | $1.92 | 6.244
rtxa5000-4.16.128.160.nvlink | 168,000 | tensor | 4 | $2.34 | 1.452
teslaa100-1.16.64.160 | 168,000 | - | 1 | $2.37 | 1.283
rtx3090-4.16.64.160 | 168,000 | tensor | 4 | $2.89 | 1.452
rtx4090-4.16.64.160 | 168,000 | tensor | 4 | $3.60 | 1.452
h100-1.16.64.160 | 168,000 | - | 1 | $3.83 | 1.283
teslav100-3.64.256.320 | 168,000 | pipeline | 3 | $3.89 | 1.513
h100nvl-1.16.96.160 | 168,000 | - | 1 | $4.11 | 1.591
teslav100-4.32.64.160 | 168,000 | tensor | 4 | $4.28 | 2.154
rtx5090-3.16.96.160 | 168,000 | pipeline | 3 | $4.34 | 1.513
rtxa5000-8.24.256.160.nvlink | 524,288 | tensor | 8 | $4.61 | 1.062
h200-1.16.128.160 | 168,000 | - | 1 | $4.74 | 2.622
rtx5090-4.16.128.160 | 168,000 | tensor | 4 | $5.74 | 2.154
teslaa100-3.32.384.160 | 524,288 | pipeline | 3 | $7.35 | 1.497
rtx4090-8.44.256.160 | 524,288 | tensor | 8 | $7.51 | 1.062
h100nvl-2.24.192.240 | 524,288 | tensor | 2 | $8.17 | 1.151
rtx5090-6.44.256.160 | 524,288 | pipeline | 6 | $8.86 | 1.101
teslaa100-4.16.256.120 | 524,288 | tensor | 4 | $9.13 | 2.040
h200-2.24.256.160 | 524,288 | tensor | 2 | $9.40 | 1.812
rtx5090-8.44.256.160 | 524,288 | tensor | 8 | $11.54 | 1.512
h100-3.32.384.160 | 524,288 | pipeline | 3 | $11.72 | 1.497
h100-4.16.256.120 | 524,288 | tensor | 4 | $14.95 | 2.040
Prices:
Name | Context, tokens | Parallelism | GPUs | Price, hour | TPS
teslat4-4.16.64.160 | 14,000 | tensor | 4 | $0.96 | 2.762
teslaa2-4.32.128.160 | 14,000 | tensor | 4 | $1.26 | 2.762
teslaa10-3.16.96.160 | 14,000 | pipeline | 3 | $1.34 | 5.600
teslaa10-4.12.48.160 | 14,000 | tensor | 4 | $1.57 | 11.188
teslav100-2.16.64.240 | 14,000 | tensor | 2 | $2.22 | 4.225
rtx3090-3.16.96.160 | 14,000 | pipeline | 3 | $2.29 | 5.600
rtxa5000-4.16.128.160.nvlink | 14,000 | tensor | 4 | $2.34 | 11.188
teslaa100-1.16.64.160 | 14,000 | - | 1 | $2.37 | 9.169
rtx4090-3.16.96.160 | 14,000 | pipeline | 3 | $2.83 | 5.600
rtx3090-4.16.64.160 | 14,000 | tensor | 4 | $2.89 | 11.188
rtx5090-2.16.64.160 | 14,000 | tensor | 2 | $2.93 | 4.225
rtxa5000-6.24.192.160.nvlink | 168,000 | pipeline | 6 | $3.50 | 1.864
rtx4090-4.16.64.160 | 14,000 | tensor | 4 | $3.60 | 11.188
h100-1.16.64.160 | 14,000 | - | 1 | $3.83 | 9.169
teslav100-3.64.256.320 | 168,000 | pipeline | 3 | $3.89 | 0.993
h100nvl-1.16.96.160 | 168,000 | - | 1 | $4.11 | 1.071
rtx5090-3.16.96.160 | 168,000 | pipeline | 3 | $4.34 | 0.993
teslav100-4.32.96.160 | 168,000 | tensor | 4 | $4.35 | 1.634
teslaa100-2.24.96.160.nvlink | 168,000 | tensor | 2 | $4.61 | 2.459
rtxa5000-8.24.256.160.nvlink | 168,000 | tensor | 8 | $4.61 | 2.795
h200-1.16.128.160 | 168,000 | - | 1 | $4.74 | 2.103
rtx5090-4.16.128.160 | 168,000 | tensor | 4 | $5.74 | 1.634
rtx4090-6.44.256.160 | 168,000 | pipeline | 6 | $5.83 | 1.864
teslaa100-3.32.384.160 | 524,288 | pipeline | 3 | $7.35 | 1.331
rtx4090-8.44.256.160 | 168,000 | tensor | 8 | $7.51 | 2.795
h100-2.24.256.160 | 168,000 | tensor | 2 | $7.84 | 2.459
teslaa100-4.16.256.120 | 524,288 | tensor | 4 | $9.13 | 1.874
h200-2.24.256.160 | 524,288 | tensor | 2 | $9.40 | 1.646
rtx5090-8.44.256.160 | 524,288 | tensor | 8 | $11.54 | 1.346
h100-3.32.384.160 | 524,288 | pipeline | 3 | $11.72 | 1.331
h100nvl-3.24.384.480 | 524,288 | pipeline | 3 | $12.38 | 1.626
h100-4.16.256.120 | 524,288 | tensor | 4 | $14.95 | 1.874
h100nvl-4.32.384.480 | 524,288 | tensor | 4 | $16.23 | 2.267
Prices:
Name | Context, tokens | Parallelism | GPUs | Price, hour | TPS
teslaa10-4.16.128.160 | 14,000 | tensor | 4 | $1.75 | 1.202
rtxa5000-4.16.128.160.nvlink | 14,000 | tensor | 4 | $2.34 | 1.202
rtx3090-4.16.96.320 | 14,000 | tensor | 4 | $2.97 | 1.202
rtxa5000-6.24.192.160.nvlink | 168,000 | pipeline | 6 | $3.50 | 1.032
rtx4090-4.16.96.320 | 14,000 | tensor | 4 | $3.68 | 1.202
teslav100-3.64.256.320 | 14,000 | pipeline | 3 | $3.89 | 1.934
h100nvl-1.16.96.160 | 14,000 | - | 1 | $4.11 | 2.870
rtx5090-3.16.96.160 | 14,000 | pipeline | 3 | $4.34 | 1.934
teslav100-4.32.96.160 | 14,000 | tensor | 4 | $4.35 | 9.629
teslaa100-2.24.96.160.nvlink | 14,000 | tensor | 2 | $4.61 | 19.517
rtxa5000-8.24.256.160.nvlink | 168,000 | tensor | 8 | $4.61 | 1.963
teslaa100-2.24.128.160.nvlink | 168,000 | tensor | 2 | $4.67 | 1.626
h200-1.16.128.160 | 168,000 | - | 1 | $4.74 | 1.270
rtx5090-4.16.128.160 | 14,000 | tensor | 4 | $5.74 | 9.629
rtx4090-6.44.256.160 | 168,000 | pipeline | 6 | $5.83 | 1.032
teslaa100-3.32.384.160 | 524,288 | pipeline | 3 | $7.35 | 1.064
rtx4090-8.44.256.160 | 168,000 | tensor | 8 | $7.51 | 1.963
h100-2.24.256.160 | 168,000 | tensor | 2 | $7.84 | 1.626
h100nvl-2.24.192.240 | 168,000 | tensor | 2 | $8.17 | 2.241
rtx5090-6.44.256.160 | 168,000 | pipeline | 6 | $8.86 | 2.085
teslaa100-4.16.256.240 | 524,288 | tensor | 4 | $9.14 | 1.607
h200-2.24.256.160 | 524,288 | tensor | 2 | $9.40 | 1.379
rtx5090-8.44.256.160 | 524,288 | tensor | 8 | $11.54 | 1.079
h100-3.32.384.160 | 524,288 | pipeline | 3 | $11.72 | 1.064
h100nvl-3.24.384.480 | 524,288 | pipeline | 3 | $12.38 | 1.359
h100-4.16.256.240 | 524,288 | tensor | 4 | $14.96 | 1.607
h100nvl-4.32.384.480 | 524,288 | tensor | 4 | $16.23 | 2.001

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.