FLUX.2-dev

A 32-billion-parameter rectified flow transformer for image generation, editing, and multi-reference combination, all driven by text instructions. It supports open-ended tasks such as text-to-image generation, single-reference editing, and multi-reference editing without requiring additional finetuning. Trained with guidance distillation to improve inference efficiency, the model is released for research and creative applications under a non-commercial license.

Key Features:

  • State-of-the-art performance in text-to-image generation, single-reference editing, and multi-reference editing.
  • No finetuning needed for character, object, or style references. Read more in the guide.
  • Trained with guidance distillation to improve computational efficiency.
  • Open weights for scientific research and artistic workflow development.
  • Outputs can be used for personal, scientific, or commercial purposes under the FLUX [dev] Non-Commercial License.
  • Deployable via the `diffusers` library with quantization support (e.g., 4-bit) for consumer GPUs such as the RTX 4090/5090; see the sketch after this list.
  • Pre-training data filtered for NSFW content and CSAM in partnership with the Internet Watch Foundation.
  • Post-training safety fine-tuning to suppress generation of unlawful content (e.g., CSAM, NCII), with third-party evaluations and iterative fine-tuning for resilience against adversarial prompts and reference images.
  • Inference-time filters for NSFW and IP-infringing content, pixel-layer watermarking, and C2PA metadata integration for provenance.
  • Licensing terms prohibit unlawful use, with policies enforced via developer agreements and monitoring. Commercial use of the model requires a separate license.
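
The deployment bullet above can be made concrete with a short text-to-image sketch. This is a minimal, hedged example: the pipeline class name (`Flux2Pipeline`), the repository id, and the sampler parameters are assumptions modelled on the FLUX.1 `FluxPipeline` API in `diffusers` and may differ in the release that ships FLUX.2 support; for 4-bit operation on a 24-32 GB card, the transformer and text encoder would additionally be loaded with a quantization config (e.g., bitsandbytes).

```python
# Minimal text-to-image sketch for FLUX.2-dev via diffusers.
# ASSUMPTIONS: the class name Flux2Pipeline and the repo id below mirror the
# FLUX.1 conventions; verify against your diffusers version before use.
import torch
from diffusers import Flux2Pipeline  # assumed class name, analogous to FluxPipeline

pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-dev",   # assumed Hugging Face repo id
    torch_dtype=torch.bfloat16,
)
# Offload idle components (text encoder, transformer, VAE) to CPU so a single
# GPU can run the pipeline; quantizing the large components shrinks this further.
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="a watercolor fox reading a newspaper in a rainy cafe window",
    num_inference_steps=28,
    guidance_scale=4.0,  # guidance-distilled model: embedded guidance, no separate CFG pass
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux2_sample.png")
```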

The 32B transformer is one component of the full image generation pipeline, which consists of:

  • Text encoder: ~24B parameters,
  • Transformer: ~32B parameters,
  • VAE: ~84M parameters.

Total: ~56B parameters


Announce Date: 18.11.2025
Parameters: 32B
Context: 512 tokens
VRAM requirements: 14.4 GB with 4-bit quantization, 28.8 GB with 8-bit, and 57.7 GB with 16-bit (see the estimate after this block)
Developer: Black Forest Labs
License: FLUX [dev] Non-Commercial License v2.0
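
The VRAM figures above scale linearly with bit width, which is what you would expect if they count the quantized transformer weights alone. A minimal back-of-envelope sketch of that arithmetic (weights only; activations, the ~24B text encoder, and the VAE add to the real footprint):

```python
# Weights-only sanity check of the VRAM figures listed above: each figure is
# (number of quantized weights) x (bits per weight) / 8 bytes.
for bits, listed_gb in [(4, 14.4), (8, 28.8), (16, 57.7)]:
    implied_weights = listed_gb * 1e9 * 8 / bits  # bytes -> parameter count
    print(f"{bits:>2}-bit: {listed_gb:5.1f} GB -> ~{implied_weights / 1e9:.1f}B quantized weights")
```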

Public endpoint

Use our pre-built public endpoints for free to test inference and explore FLUX.2-dev capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended configurations for hosting FLUX.2-dev

The three tiers below track the VRAM requirements listed above: the first holds configurations with enough GPU memory for 16-bit inference (~57.7 GB), the second adds cards that fit the 8-bit variant (~28.8 GB), and the third adds cards that fit 4-bit quantization (~14.4 GB).

Prices, 16-bit (~57.7 GB VRAM):

Name                     vCPU   RAM, GB   Disk, GB   GPUs   Price, hour
teslaa100-1.16.64.160    16     64        160        1      $2.37
teslah100-1.16.64.160    16     64        160        1      $3.83
h200-1.16.128.160        16     128       160        1      $4.74

Prices, 8-bit quantization (~28.8 GB VRAM):

Name                     vCPU   RAM, GB   Disk, GB   GPUs   Price, hour
teslav100-1.12.64.160    12     64        160        1      $1.20
rtx5090-1.16.64.160      16     64        160        1      $1.59
teslaa100-1.16.64.160    16     64        160        1      $2.37
teslah100-1.16.64.160    16     64        160        1      $3.83
h200-1.16.128.160        16     128       160        1      $4.74

Prices, 4-bit quantization (~14.4 GB VRAM):

Name                     vCPU   RAM, GB   Disk, GB   GPUs   Price, hour
teslat4-1.16.16.160      16     16        160        1      $0.33
teslaa2-1.16.32.160      16     32        160        1      $0.38
teslaa10-1.16.32.160     16     32        160        1      $0.53
rtx3090-1.16.24.160      16     24        160        1      $0.88
rtx4090-1.16.32.160      16     32        160        1      $1.15
teslav100-1.12.64.160    12     64        160        1      $1.20
rtx5090-1.16.64.160      16     64        160        1      $1.59
teslaa100-1.16.64.160    16     64        160        1      $2.37
teslah100-1.16.64.160    16     64        160        1      $3.83
h200-1.16.128.160        16     128       160        1      $4.74
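
For budgeting a long-running private instance, the hourly rates above translate into rough monthly figures. The sketch below uses a subset of the configurations with rates copied from the tables; actual monthly or long-term billing on the platform may be discounted, so treat this as an estimate only.

```python
# Illustrative monthly cost at 24/7 utilisation for some of the configurations
# above, using the quoted hourly rates.
HOURLY_RATES_USD = {
    "teslat4-1.16.16.160": 0.33,
    "rtx4090-1.16.32.160": 1.15,
    "rtx5090-1.16.64.160": 1.59,
    "teslaa100-1.16.64.160": 2.37,
    "teslah100-1.16.64.160": 3.83,
    "h200-1.16.128.160": 4.74,
}

HOURS_PER_MONTH = 24 * 30  # ~720 hours

for name, rate in HOURLY_RATES_USD.items():
    print(f"{name:<24} ${rate:.2f}/h  ~=  ${rate * HOURS_PER_MONTH:,.0f}/month")
```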

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.