FLUX.2-klein-4B

FLUX.2-klein-4B is a 4-billion-parameter rectified flow transformer designed for fast image generation and editing. It unifies text-to-image generation and multi-reference image editing in a single compact architecture, enabling end-to-end inference in under a second. Optimized for real-time applications without compromising quality, it runs on consumer-grade GPUs such as the NVIDIA RTX 3090 or RTX 4070 with approximately 13 GB of VRAM.

Key features:

  • Unified model architecture supporting both text-to-image synthesis and image-to-image editing.
  • Sub-second image generation for interactive workflows, production deployments, and latency-sensitive applications.
  • Fully open source under the Apache 2.0 license, permitting commercial use.

Limitations:

  • It does not provide factual information and may not always match input prompts precisely.
  • The model may generate inaccurate text or amplify biases from training data.
  • Development prioritizes safety: training data was filtered in collaboration with organizations such as the IWF, and third-party tools (e.g., TheHive.ai) are used to address misuse risks.

The model is one component of a full image generation pipeline, which consists of:

  • Text encoder: ~4B parameters,
  • Transformer: ~4B parameters,
  • VAE: ~84M parameters.

Total: ~8B parameters
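The component sizes above can be turned into a back-of-the-envelope weight-memory estimate. The sketch below assumes bfloat16 storage (2 bytes per parameter); actual VRAM use also depends on activations, the framework, and any offloading or quantization, which is presumably why the quoted figure (~13 GB) is below the raw bf16 weight total.

```python
# Rough weight-memory estimate for the pipeline components listed above,
# assuming bfloat16 storage (2 bytes per parameter).

BYTES_PER_PARAM_BF16 = 2

components = {
    "text_encoder": 4e9,   # ~4B parameters
    "transformer": 4e9,    # ~4B parameters
    "vae": 84e6,           # ~84M parameters
}

def weight_gib(params: float, bytes_per_param: int = BYTES_PER_PARAM_BF16) -> float:
    """Approximate weight footprint in GiB."""
    return params * bytes_per_param / 2**30

for name, n in components.items():
    print(f"{name}: ~{weight_gib(n):.2f} GiB")

total = sum(components.values())
print(f"total: ~{total / 1e9:.1f}B params, ~{weight_gib(total):.1f} GiB in bf16")
```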


Announce Date: 14.01.2026
Parameters: 4B
Developer: Black Forest Labs
Diffusers Version: 0.37.0.dev0
vLLM-Omni Version: 0.14.0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore FLUX.2-klein-4B capabilities. You can obtain an API access token on the token management page after registration and verification.
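Once an endpoint exists, requests are authenticated with the API access token. The sketch below is hypothetical: the URL and payload fields are placeholders, not a documented API. It only illustrates the common pattern of attaching the token as a Bearer header, using the standard library and without actually sending anything.

```python
import json
import urllib.request

# Hypothetical request builder: "https://example.invalid/v1/images" and the
# payload schema are placeholders, not a real documented endpoint.

def build_generation_request(token: str, prompt: str,
                             url: str = "https://example.invalid/v1/images") -> urllib.request.Request:
    """Construct (but do not send) an authenticated JSON POST request."""
    payload = json.dumps({"model": "FLUX.2-klein-4B", "prompt": prompt}).encode()
    return urllib.request.Request(
        url,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_generation_request("YOUR_API_TOKEN", "a lighthouse at dawn")
print(req.get_method(), req.full_url)
```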
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security by processing data in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting FLUX.2-klein-4B

Prices:

Name                    GPUs    Price, hour
teslat4-1.16.16.160     1       $0.33
rtx2080ti-1.10.16.500   1       $0.38
teslaa2-1.16.32.160     1       $0.38
teslaa10-1.16.32.160    1       $0.53
rtx3080-1.16.32.160     1       $0.57
rtx3090-1.16.24.160     1       $0.83
rtx4090-1.16.32.160     1       $1.02
teslav100-1.12.64.160   1       $1.20
rtx5090-1.16.64.160     1       $1.59
teslaa100-1.16.64.160   1       $2.37
h100-1.16.64.160        1       $3.83
h100nvl-1.16.96.160     1       $4.11
h200-1.16.128.160       1       $4.74
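Hourly prices can be converted into an approximate per-image cost. The generation times in the sketch below are hypothetical (the table does not list them); substitute measured values for your own workload and settings.

```python
# Converting an hourly rental price into an approximate per-image cost.
# Per-image generation times here are assumed, not measured.

def cost_per_image(price_per_hour: float, seconds_per_image: float) -> float:
    """Cost in dollars for one generated image at a given throughput."""
    return price_per_hour * seconds_per_image / 3600

# Example: rtx3090-1.16.24.160 at $0.83/hour, assuming 1 s per image.
print(f"${cost_per_image(0.83, 1.0):.5f} per image")
# Example: h100-1.16.64.160 at $3.83/hour, assuming 0.5 s per image.
print(f"${cost_per_image(3.83, 0.5):.5f} per image")
```

Note that a faster, more expensive GPU can still be cheaper per image if its generation time is proportionally lower.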

Related models

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.