Krea Realtime 14B

Krea Realtime 14B is a distillation of Wan-AI's Wan 2.1 14B text-to-video model. It was converted into an autoregressive model using the Self-Forcing method and generates video at 11 frames per second with 4 inference steps on a single NVIDIA B200 GPU.

Stabilization Technologies:

  • KV Cache Recomputation and KV Cache Attention Bias — methods to reduce error accumulation during generation.
  • Memory optimizations tailored for autoregressive models, simplifying the training of large architectures.
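One plausible reading of a KV cache attention bias is a penalty added to the attention logits of cached positions so that fresh tokens dominate and cache errors accumulate more slowly. The sketch below illustrates that idea with plain NumPy; it is a toy model, not the actual Krea Realtime implementation, and the function names are invented for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def biased_attention(q, k_new, v_new, k_cache, v_cache, cache_bias=-1.0):
    # Concatenate cached and fresh keys/values, then penalize only the
    # cached positions. A uniform bias over *all* logits would cancel in
    # the softmax, so the bias must apply to the cache slice alone.
    k = np.concatenate([k_cache, k_new], axis=0)
    v = np.concatenate([v_cache, v_new], axis=0)
    logits = q @ k.T / np.sqrt(q.shape[-1])       # (n_q, n_cache + n_new)
    logits[:, : k_cache.shape[0]] += cache_bias   # down-weight cached tokens
    return softmax(logits) @ v                    # (n_q, d)

rng = np.random.default_rng(0)
q = rng.standard_normal((2, 8))
k_new, v_new = rng.standard_normal((4, 8)), rng.standard_normal((4, 8))
k_cache, v_cache = rng.standard_normal((16, 8)), rng.standard_normal((16, 8))
out = biased_attention(q, k_new, v_new, k_cache, v_cache, cache_bias=-2.0)
```

As the bias grows more negative, the cached tokens are progressively masked out, which is the error-dampening effect the technique aims for.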

Real-Time Capabilities:

  • Video generation with a time to first frame under 1 second.
  • Prompts can be modified during generation, and the video style can be changed on the fly (restyle).
  • Video-to-video support: input videos, webcam streams, or a canvas can be processed for controlled synthesis and editing.
  • Text-to-video generation in streaming mode.
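The mid-stream prompt switching described above amounts to a block-wise autoregressive loop where the current prompt is re-read before each block. The sketch below shows that control flow only; `generate_block` is a stand-in stub, not the actual model call:

```python
def stream_video(prompt_schedule, num_blocks=6, frames_per_block=12):
    """Toy block-wise autoregressive loop: each block is generated with
    whatever prompt is current, so the prompt (or style) can change
    mid-stream. generate_block stands in for the real model call."""
    def generate_block(prompt, block_idx):
        start = block_idx * frames_per_block
        return [f"{prompt}/frame{start + i}" for i in range(frames_per_block)]

    for b in range(num_blocks):
        prompt = prompt_schedule(b)       # may change between blocks
        yield generate_block(prompt, b)   # stream each block as it is ready

# Switch to a watercolor restyle halfway through the clip:
schedule = lambda b: "a red fox" if b < 3 else "a red fox, watercolor style"
blocks = list(stream_video(schedule))
```

Because each block is yielded as soon as it is generated, a consumer can start displaying frames well before the full clip is finished, which is what enables the sub-second time to first frame.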

Technical Details:

  • Model size: more than 10× larger than previous real-time video generation models.
  • Inference: Implemented using the Diffusers library (Modular Diffusers module). Requires components from the Wan-AI/Wan2.1-T2V-1.3B repository and specific settings (e.g., torch.bfloat16 for memory optimization).
  • Infrastructure: Supports CUDA and requires installation of dependencies such as ffmpeg and flash_attn.
  • Usage:
    • Launch via a web interface (clone the krea-ai/realtime-video repository and follow setup steps).
    • Integration with the Diffusers library for video generation via API, including parameter configuration (number of blocks, frames, seed, etc.).
    • The model is available on Hugging Face, with inference code and additional instructions in the GitHub repository.
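The generation parameters listed above (number of blocks, frames, seed) might be assembled as in this minimal sketch. The parameter names, the Hugging Face model id, and the commented-out pipeline call are assumptions; consult the GitHub repository for the exact Modular Diffusers interface:

```python
def build_generation_kwargs(prompt, num_frames=81, num_blocks=6, seed=42):
    # Parameter names follow the list above (number of blocks, frames, seed);
    # the exact names accepted by the pipeline are an assumption.
    return {
        "prompt": prompt,
        "num_frames": num_frames,
        "num_blocks": num_blocks,
        "seed": seed,
    }

kwargs = build_generation_kwargs("a red fox running through snow")

# Illustration only -- loading the ~14B model requires a CUDA GPU and the
# Wan-AI/Wan2.1-T2V-1.3B components; identifiers here are assumptions:
# import torch
# from diffusers import DiffusionPipeline
# pipe = DiffusionPipeline.from_pretrained(
#     "krea/krea-realtime-video", torch_dtype=torch.bfloat16
# ).to("cuda")
# frames = pipe(**kwargs).frames
```

Loading in `torch.bfloat16`, as the model card suggests, roughly halves the memory footprint compared to float32 weights.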

The model is one component of a video generation pipeline:

  • Transformer (Krea Realtime 14B): ~14B parameters.

For inference, the following components from Wan-AI/Wan2.1-T2V-1.3B are also required:

  • UMT5 text encoder: ~6B parameters,
  • Transformer: ~1.3B parameters,
  • VAE: ~127M parameters.

Total for the Wan2.1-T2V-1.3B components: ~7B parameters.


Announce Date: 20.10.2025
Parameters: 14B
Developer: Krea
Diffusers Version: 0.36.0.dev0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Krea Realtime 14B capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying a private instance when you need to:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended server configurations for hosting Krea Realtime 14B

Prices:
Name GPU Price, hour
teslat4-1.16.16.160 1 $0.33
rtx2080ti-1.10.16.500 1 $0.38
teslaa2-1.16.32.160 1 $0.38
teslaa10-1.16.32.160 1 $0.53
rtx3080-1.16.32.160 1 $0.57
rtx3090-1.16.24.160 1 $0.83
rtx4090-1.16.32.160 1 $1.02
teslav100-1.12.64.160 1 $1.20
rtx5090-1.16.64.160 1 $1.59
teslaa100-1.16.64.160 1 $2.37
h100-1.16.64.160 1 $3.83
h100nvl-1.16.96.160 1 $4.11
h200-1.16.128.160 1 $4.74

Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.