Seed-OSS-36B

reasoning

Seed-OSS-36B is an open large language model developed by ByteDance's Seed team for a broad range of AI tasks requiring strong reasoning, long-context processing, and flexible agent-based workflows. Built on the classic Transformer architecture, it incorporates state-of-the-art techniques: Rotary Position Embedding (RoPE) for handling extremely long sequences (up to 512K tokens), Grouped Query Attention (GQA) for faster and more memory-efficient inference, RMSNorm normalization, and SwiGLU activation for high-quality training and generation. The model comprises 64 hidden layers, 36 billion parameters, 80 attention heads (with 8 KV heads), and a head dimension of 128. This configuration delivers efficient performance across text generation as well as complex reasoning, programming, and agent-interaction tasks.

A key feature of the Seed-OSS approach is native support for ultra-long context without additional techniques, which makes it particularly valuable for processing large documents, files, and extended dialogues.
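The GQA layout has a direct, checkable consequence for long-context memory use: caching keys and values for 8 heads instead of 80 cuts the KV cache roughly tenfold. A quick sizing sketch from the numbers above (illustrative names; assumes a bf16 cache, which real deployments may quantize further):

```python
# Back-of-envelope KV-cache sizing from the spec above: 64 layers, 8 KV heads
# (GQA), head dimension 128. Assumes a bf16 (2-byte) cache.

LAYERS = 64
KV_HEADS = 8         # 80 query heads share these 8 KV heads
HEAD_DIM = 128
BYTES_PER_VALUE = 2  # bf16

def kv_cache_bytes(num_tokens: int) -> int:
    """Bytes of KV cache needed for a sequence of num_tokens."""
    # Factor of 2: keys and values are both cached.
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_PER_VALUE * num_tokens

print(f"Per token:         {kv_cache_bytes(1) // 1024} KiB")            # 256 KiB
print(f"Full 512K context: {kv_cache_bytes(512 * 1024) // 2**30} GiB")  # 128 GiB
```

At full context the cache alone approaches 128 GiB, which is why the largest-context server configurations below span multiple GPUs.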

Seed-OSS-36B achieves top-tier results on major global benchmarks within its class, leading in benchmarks such as LiveCodeBench, MMLU, AIME24/25, GPQA-D, and RULER. It also demonstrates state-of-the-art instruction-following capabilities on IFEval and excels in agent-oriented tasks like TAU1-Retail and SWE-Bench Verified.

A distinctive feature of Seed-OSS-36B-Instruct is its flexible "thinking budget": users can cap the number of reasoning tokens to strike the right balance between speed and depth of the reasoning chain. This is especially useful for tuning the model to diverse practical applications, from fast chatbots to deep analytical use cases in business, science, and education.

Designed for international deployment, the model supports multiple languages and, importantly, is released under the open Apache 2.0 license. Technically, Seed-OSS-36B is compatible with the standard Transformers library as well as modern inference frameworks such as vLLM, ensuring seamless integration into industrial applications, cloud services, and research projects.
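The budget mechanism can be pictured with a toy sketch (a hypothetical helper, not Seed-OSS's actual decoding loop; see the model card for the real interface): reasoning tokens are capped at the budget, after which the final answer is emitted.

```python
# Toy illustration of a thinking budget (hypothetical helper, not the official
# Seed-OSS API): cap the reasoning phase at `thinking_budget` tokens, then
# emit the final answer.

def apply_thinking_budget(reasoning, answer, thinking_budget):
    """Return the output stream with the reasoning phase capped at the budget."""
    if thinking_budget == 0:
        return list(answer)  # skip reasoning entirely ("no-think" mode)
    return list(reasoning[:thinking_budget]) + list(answer)

reasoning = ["step-1", "step-2", "step-3", "step-4"]
answer = ["final-answer"]
print(apply_thinking_budget(reasoning, answer, thinking_budget=2))
# ['step-1', 'step-2', 'final-answer']
```

A small budget trades reasoning depth for latency; a budget of zero turns the model into a direct-answer chatbot.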


Announce Date: 20.08.2025
Parameters: 36B
Context: 512K
Layers: 64
Attention Type: Full Attention
Developer: ByteDance-Seed
Transformers Version: 4.55.0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Seed-OSS-36B capabilities. You can obtain an API access token on the token management page after registration and verification.
Model Name | Context | Type | GPU | Status | Link
There are no public endpoints for this model yet.
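Once a public endpoint is listed, access typically follows the OpenAI-compatible chat pattern, with the API token sent as a bearer header. A minimal request-building sketch using only the standard library (the model identifier here is a placeholder; take the real endpoint URL and model name from the table once available):

```python
import json

def build_chat_request(token: str, prompt: str, model: str = "Seed-OSS-36B"):
    """Build headers and a JSON body for an OpenAI-compatible chat call."""
    headers = {
        "Authorization": f"Bearer {token}",  # token from the token management page
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return headers, body

headers, body = build_chat_request("YOUR_API_TOKEN", "Summarize this document.")
print(headers["Authorization"])  # Bearer YOUR_API_TOKEN
```

The same payload shape works with any OpenAI-compatible client library, so switching between a public endpoint and a private vLLM server is just a change of base URL.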

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable the full context window for long sequences,
  • process data with top-tier security in an isolated, dedicated environment,
  • run custom weights, such as fine-tuned models or LoRA adapters.
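When choosing a configuration from the tables below, a useful first-order check is whether the model weights alone fit into the combined GPU memory. A rough sketch (illustrative helper; the KV cache, activations, and framework overhead come on top of this):

```python
# First-order sizing check for a private server: do the 36B parameters of
# weights fit in combined GPU memory at a given precision?

PARAMS = 36_000_000_000

def weight_gib(bytes_per_param: float) -> float:
    """GiB occupied by the model weights at the given precision."""
    return PARAMS * bytes_per_param / 2**30

print(f"bf16: {weight_gib(2):.0f} GiB")   # ~67 GiB: multi-GPU, or one 80 GB card
print(f"int8: {weight_gib(1):.0f} GiB")
print(f"int4: {weight_gib(0.5):.0f} GiB")
```

This is why single-GPU rows in the tables are limited to 80 GB-class cards (A100, H100, H200), while consumer GPUs appear only in 2-to-8-way tensor- or pipeline-parallel configurations.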

Recommended server configurations for hosting Seed-OSS-36B

Prices:

Name | Context, tokens | Parallelism | GPUs | Price per hour | Max concurrency
teslat4-2.16.32.160 | 14,000 | tensor | 2 | $0.54 | 0.054
teslaa2-2.16.32.160 | 14,000 | tensor | 2 | $0.57 | 0.054
rtx2080ti-3.12.24.120 | 14,000 | pipeline | 3 | $0.84 | 0.042
teslaa10-2.16.64.160 | 14,000 | tensor | 2 | $0.93 | 0.167
rtx2080ti-4.16.32.160 | 14,000 | tensor | 4 | $1.12 | 0.100
teslav100-1.12.64.160 | 14,000 | n/a | 1 | $1.20 | 0.074
rtxa5000-2.16.64.160.nvlink | 14,000 | tensor | 2 | $1.23 | 0.167
rtx3080-3.16.64.160 | 14,000 | pipeline | 3 | $1.43 | 0.021
rtx3090-2.16.64.160 | 14,000 | tensor | 2 | $1.56 | 0.167
rtx5090-1.16.64.160 | 14,000 | n/a | 1 | $1.59 | 0.074
teslaa10-4.16.64.160 | 168,000 | tensor | 4 | $1.62 | 0.465
teslaa2-6.32.128.160 | 168,000 | pipeline | 6 | $1.65 | 0.426
rtx3080-4.16.64.160 | 14,000 | tensor | 4 | $1.82 | 0.071
rtx4090-2.16.64.160 | 14,000 | tensor | 2 | $1.92 | 0.167
rtxa5000-4.16.128.160.nvlink | 168,000 | tensor | 4 | $2.34 | 0.465
teslaa100-1.16.64.160 | 168,000 | n/a | 1 | $2.37 | 0.411
rtx3090-4.16.64.160 | 168,000 | tensor | 4 | $2.89 | 0.465
rtx4090-4.16.64.160 | 168,000 | tensor | 4 | $3.60 | 0.465
h100-1.16.64.160 | 168,000 | n/a | 1 | $3.83 | 0.411
teslav100-3.64.256.320 | 168,000 | pipeline | 3 | $3.89 | 0.485
h100nvl-1.16.96.160 | 168,000 | n/a | 1 | $4.11 | 0.510
teslav100-4.32.64.160 | 168,000 | tensor | 4 | $4.28 | 0.690
rtx5090-3.16.96.160 | 168,000 | pipeline | 3 | $4.34 | 0.485
rtxa5000-8.24.256.160.nvlink | 524,288 | tensor | 8 | $4.61 | 1.062
h200-1.16.128.160 | 168,000 | n/a | 1 | $4.74 | 0.840
rtx5090-4.16.128.160 | 168,000 | tensor | 4 | $5.74 | 0.690
teslaa100-3.32.384.160 | 524,288 | pipeline | 3 | $7.35 | 1.497
rtx4090-8.44.256.160 | 524,288 | tensor | 8 | $7.51 | 1.062
h100nvl-2.24.192.240 | 524,288 | tensor | 2 | $8.17 | 1.151
rtx5090-6.44.256.160 | 524,288 | pipeline | 6 | $8.86 | 1.101
teslaa100-4.16.256.120 | 524,288 | tensor | 4 | $9.13 | 2.040
h200-2.24.256.160 | 524,288 | tensor | 2 | $9.40 | 1.812
rtx5090-8.44.256.160 | 524,288 | tensor | 8 | $11.54 | 1.512
h100-3.32.384.160 | 524,288 | pipeline | 3 | $11.72 | 1.497
h100-4.16.256.120 | 524,288 | tensor | 4 | $14.95 | 2.040
Prices:

Name | Context, tokens | Parallelism | GPUs | Price per hour | Max concurrency
teslat4-4.16.64.160 | 14,000 | tensor | 4 | $0.96 | 0.074
teslaa2-4.32.128.160 | 14,000 | tensor | 4 | $1.26 | 0.074
teslaa10-3.16.96.160 | 14,000 | pipeline | 3 | $1.34 | 0.150
teslaa10-4.12.48.160 | 14,000 | tensor | 4 | $1.57 | 0.299
teslav100-2.16.64.240 | 14,000 | tensor | 2 | $2.22 | 0.113
rtx3090-3.16.96.160 | 14,000 | pipeline | 3 | $2.29 | 0.150
rtxa5000-4.16.128.160.nvlink | 14,000 | tensor | 4 | $2.34 | 0.299
teslaa100-1.16.64.160 | 14,000 | n/a | 1 | $2.37 | 0.245
rtx4090-3.16.96.160 | 14,000 | pipeline | 3 | $2.83 | 0.150
rtx3090-4.16.64.160 | 14,000 | tensor | 4 | $2.89 | 0.299
rtx5090-2.16.64.160 | 14,000 | tensor | 2 | $2.93 | 0.113
rtxa5000-6.24.192.160.nvlink | 168,000 | pipeline | 6 | $3.50 | 0.597
rtx4090-4.16.64.160 | 14,000 | tensor | 4 | $3.60 | 0.299
h100-1.16.64.160 | 14,000 | n/a | 1 | $3.83 | 0.245
teslav100-3.64.256.320 | 168,000 | pipeline | 3 | $3.89 | 0.318
h100nvl-1.16.96.160 | 168,000 | n/a | 1 | $4.11 | 0.343
rtx5090-3.16.96.160 | 168,000 | pipeline | 3 | $4.34 | 0.318
teslav100-4.32.96.160 | 168,000 | tensor | 4 | $4.35 | 0.524
teslaa100-2.24.96.160.nvlink | 168,000 | tensor | 2 | $4.61 | 0.788
rtxa5000-8.24.256.160.nvlink | 168,000 | tensor | 8 | $4.61 | 0.896
h200-1.16.128.160 | 168,000 | n/a | 1 | $4.74 | 0.674
rtx5090-4.16.128.160 | 168,000 | tensor | 4 | $5.74 | 0.524
rtx4090-6.44.256.160 | 168,000 | pipeline | 6 | $5.83 | 0.597
teslaa100-3.32.384.160 | 524,288 | pipeline | 3 | $7.35 | 1.331
rtx4090-8.44.256.160 | 168,000 | tensor | 8 | $7.51 | 0.896
h100-2.24.256.160 | 168,000 | tensor | 2 | $7.84 | 0.788
teslaa100-4.16.256.120 | 524,288 | tensor | 4 | $9.13 | 1.874
h200-2.24.256.160 | 524,288 | tensor | 2 | $9.40 | 1.646
rtx5090-8.44.256.160 | 524,288 | tensor | 8 | $11.54 | 1.346
h100-3.32.384.160 | 524,288 | pipeline | 3 | $11.72 | 1.331
h100nvl-3.24.384.480 | 524,288 | pipeline | 3 | $12.38 | 1.626
h100-4.16.256.120 | 524,288 | tensor | 4 | $14.95 | 1.874
h100nvl-4.32.384.480 | 524,288 | tensor | 4 | $16.23 | 2.267
Prices:

Name | Context, tokens | Parallelism | GPUs | Price per hour | Max concurrency
teslaa10-4.16.128.160 | 14,000 | tensor | 4 | $1.75 | 0.032
rtxa5000-4.16.128.160.nvlink | 14,000 | tensor | 4 | $2.34 | 0.032
rtx3090-4.16.96.320 | 14,000 | tensor | 4 | $2.97 | 0.032
rtxa5000-6.24.192.160.nvlink | 168,000 | pipeline | 6 | $3.50 | 0.331
rtx4090-4.16.96.320 | 14,000 | tensor | 4 | $3.68 | 0.032
teslav100-3.64.256.320 | 14,000 | pipeline | 3 | $3.89 | 0.052
h100nvl-1.16.96.160 | 14,000 | n/a | 1 | $4.11 | 0.077
rtx5090-3.16.96.160 | 14,000 | pipeline | 3 | $4.34 | 0.052
teslav100-4.32.96.160 | 14,000 | tensor | 4 | $4.35 | 0.257
teslaa100-2.24.96.160.nvlink | 14,000 | tensor | 2 | $4.61 | 0.521
rtxa5000-8.24.256.160.nvlink | 168,000 | tensor | 8 | $4.61 | 0.629
teslaa100-2.24.128.160.nvlink | 168,000 | tensor | 2 | $4.67 | 0.521
h200-1.16.128.160 | 168,000 | n/a | 1 | $4.74 | 0.407
rtx5090-4.16.128.160 | 14,000 | tensor | 4 | $5.74 | 0.257
rtx4090-6.44.256.160 | 168,000 | pipeline | 6 | $5.83 | 0.331
teslaa100-3.32.384.160 | 524,288 | pipeline | 3 | $7.35 | 1.064
rtx4090-8.44.256.160 | 168,000 | tensor | 8 | $7.51 | 0.629
h100-2.24.256.160 | 168,000 | tensor | 2 | $7.84 | 0.521
h100nvl-2.24.192.240 | 168,000 | tensor | 2 | $8.17 | 0.718
rtx5090-6.44.256.160 | 168,000 | pipeline | 6 | $8.86 | 0.668
teslaa100-4.16.256.240 | 524,288 | tensor | 4 | $9.14 | 1.607
h200-2.24.256.160 | 524,288 | tensor | 2 | $9.40 | 1.379
rtx5090-8.44.256.160 | 524,288 | tensor | 8 | $11.54 | 1.079
h100-3.32.384.160 | 524,288 | pipeline | 3 | $11.72 | 1.064
h100nvl-3.24.384.480 | 524,288 | pipeline | 3 | $12.38 | 1.359
h100-4.16.256.240 | 524,288 | tensor | 4 | $14.96 | 1.607
h100nvl-4.32.384.480 | 524,288 | tensor | 4 | $16.23 | 2.001

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.