FLUX.2-klein-9B

FLUX.2-klein-9B is a 9 billion parameter rectified flow transformer designed for high-speed image generation and editing. It unifies text-to-image generation and multi-reference image editing in a single compact architecture, achieving state-of-the-art quality with end-to-end inference in under half a second. The model uses an 8 billion parameter Qwen3 text embedder and is step-distilled to 4 inference steps, enabling real-time performance while matching or exceeding the quality of models five times its size.

Key Features:

  • Supports both text-to-image generation and image-to-image editing with multi-reference inputs.
  • Optimized for real-time applications: generates images in under half a second using 4 inference steps, without sacrificing output quality.
  • Fits in ~29GB VRAM, compatible with NVIDIA RTX 4090 or higher GPUs.
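As a rough illustration of how the 4-step distilled inference described above might be invoked, here is a minimal sketch assuming a diffusers FluxPipeline-style API. The pipeline class and repository id are assumptions, not documented values; consult the official model card for the exact names.

```python
# Hypothetical usage sketch: the pipeline class and repo id below are
# assumptions; check the official model card for the real identifiers.
NUM_STEPS = 4  # the model is step-distilled to 4 inference steps


def generate(prompt: str):
    # Imports are kept inside the function so the module can be loaded
    # without GPU dependencies installed.
    import torch
    from diffusers import FluxPipeline  # assumed FLUX-style pipeline class

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.2-klein-9B",  # assumed repo id
        torch_dtype=torch.bfloat16,
    )
    pipe.to("cuda")
    # A distilled model should be run with its distilled step count.
    return pipe(prompt, num_inference_steps=NUM_STEPS).images[0]
```

Calling `generate("a lighthouse at dawn").save("out.png")` would then produce a file on disk, assuming a GPU with enough VRAM is available.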

Limitations:

  • Non-commercial use only (licensed under FLUX Non-Commercial License).
  • May generate inaccurate or distorted text in outputs.
  • Susceptible to biases present in training data.
  • Outputs may not always strictly follow the prompt.

The full image generation pipeline consists of:

  • Text encoder: ~8B parameters
  • Transformer: ~9B parameters
  • VAE: ~84M parameters

Total: ~17B parameters
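The component totals above can be checked with simple arithmetic. The bf16 weight sizes below are a back-of-envelope estimate of mine, not figures from the model card:

```python
# Component parameter counts from the breakdown above (approximate).
components = {
    "text_encoder": 8.0e9,   # Qwen3 text embedder
    "transformer":  9.0e9,   # rectified flow transformer
    "vae":          84e6,
}

total_params = sum(components.values())
print(f"total parameters: {total_params / 1e9:.2f}B")  # ~17.08B

# Rough bf16 weight footprint (2 bytes per parameter). Runtime VRAM also
# includes activations and framework overhead, so this is only a lower
# bound on memory, not the card's stated footprint.
for name, n in components.items():
    print(f"{name}: {n * 2 / 1e9:.1f} GB of weights in bf16")
```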


Announce Date: 14.01.2026
Parameters: 9B
Developer: Black Forest Labs
Diffusers Version: 0.37.0.dev0
vLLM-Omni Version: 0.14.0
License: FLUX Non-Commercial License v2.1

Public endpoint

Use our pre-built public endpoints for free to test inference and explore FLUX.2-klein-9B capabilities. You can obtain an API access token on the token management page after registration and verification.
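As a rough illustration of calling such an endpoint with a bearer token, here is a sketch using only the standard library. The payload field names and the request shape are illustrative assumptions, not the documented API schema; the real base URL and schema come from the endpoint page after registration.

```python
import json
import urllib.request


def build_payload(prompt: str, steps: int = 4) -> dict:
    # Field names are illustrative assumptions, not the documented schema.
    return {"prompt": prompt, "num_inference_steps": steps}


def generate(api_token: str, base_url: str, prompt: str) -> bytes:
    # base_url is whatever the endpoint page provides after registration
    # and verification; it is not hard-coded here on purpose.
    req = urllib.request.Request(
        base_url,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```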

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security by processing data in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting FLUX.2-klein-9B

Prices:

  Name                     GPUs   Price, hour
  teslat4-1.16.16.160      1      $0.33
  rtx2080ti-1.10.16.500    1      $0.38
  teslaa2-1.16.32.160      1      $0.38
  teslaa10-1.16.32.160     1      $0.53
  rtx3080-1.16.32.160      1      $0.57
  rtx3090-1.16.24.160      1      $0.83
  rtx4090-1.16.32.160      1      $1.02
  teslav100-1.12.64.160    1      $1.20
  rtx5090-1.16.64.160      1      $1.59
  teslaa100-1.16.64.160    1      $2.37
  h100-1.16.64.160         1      $3.83
  h100nvl-1.16.96.160      1      $4.11
  h200-1.16.128.160        1      $4.74
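With sub-half-second generation, the hourly rates above translate directly into a per-image cost. A quick estimate, assuming a flat 0.5 s per image (which ignores batching, warm-up, and model load time):

```python
def cost_per_1000_images(price_per_hour: float,
                         seconds_per_image: float = 0.5) -> float:
    """Naive cost estimate: ignores batching, warm-up, and load time."""
    hours = 1000 * seconds_per_image / 3600
    return price_per_hour * hours


# Example rates taken from the price table above.
for name, price in [("rtx4090-1.16.32.160", 1.02),
                    ("h100-1.16.64.160", 3.83)]:
    print(f"{name}: ${cost_per_1000_images(price):.2f} per 1000 images")
# rtx4090: ~$0.14 per 1000 images; h100: ~$0.53 per 1000 images
```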


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.