Wan2.1-T2V-1.3B-Diffusers

This is a text-to-video model with 1.3 billion parameters for generating video from text prompts. It is optimized for consumer-grade GPUs: inference requires 8.19 GB of VRAM, and generating a 5-second 480p video takes about 4 minutes on an RTX 4090 without additional optimization.

Key Features:

  • High performance: outperforms existing open-source and commercial solutions across benchmark metrics.
  • Support for multiple tasks:
    • Video generation from text (Text-to-Video),
    • Video generation from images (Image-to-Video),
    • Video editing, audio synthesis for video, and text-to-image generation,
    • Visual text generation — the first video model capable of rendering both Chinese and English text within generated video.
  • Efficient VAE — Wan-VAE encodes and decodes Full HD (1080p) video of any length while preserving temporal information and minimizing memory consumption.
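As a rough sketch of what that efficiency means in practice: the Wan2.1 report describes Wan-VAE as compressing video 4x along time and 8x8 spatially into 16 latent channels, so the latent size of a clip can be estimated as follows (the helper function is illustrative, not part of any library):

```python
# Sketch of Wan-VAE's compression arithmetic (4x temporal, 8x8 spatial,
# 16 latent channels, per the Wan2.1 report). Frame counts follow the
# "4k + 1" convention used by the pipeline.
def latent_shape(num_frames: int, height: int, width: int) -> tuple:
    assert (num_frames - 1) % 4 == 0, "num_frames must be 4k + 1"
    return (16, (num_frames - 1) // 4 + 1, height // 8, width // 8)

# An 81-frame 480x832 clip shrinks to a 16 x 21 x 60 x 104 latent.
print(latent_shape(81, 480, 832))  # (16, 21, 60, 104)
```

This compression is what keeps memory consumption low enough for arbitrary-length encode/decode.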

Technical Details:
Generation:

  • Supports 480p and 720p resolutions (stability is lower for 720p due to limited training data).
  • Generation time: ~4 minutes per 5-second video (480p, without optimization).
  • Supports the Diffusers library (Hugging Face) for inference.
  • Single-GPU mode: works smoothly on RTX 4090 with the offload_model option.
  • Multi-GPU mode: scalable via FSDP combined with xDiT's Unified Sequence Parallelism (USP).
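Since inference runs through the Diffusers library, a minimal single-GPU text-to-video run might look like the sketch below (the prompt, resolution, and output path are illustrative; the calls follow the `WanPipeline` API in recent Diffusers releases and need a CUDA GPU plus the downloaded weights):

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"

# The VAE is loaded in float32 for decode quality; the transformer
# runs in bfloat16 to fit consumer VRAM.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # offloading keeps peak VRAM within consumer-GPU limits

frames = pipe(
    prompt="A cat walks on the grass, realistic style",  # illustrative prompt
    height=480,
    width=832,
    num_frames=81,       # ~5 seconds at the model's native 16 fps
    guidance_scale=5.0,
).frames[0]

export_to_video(frames, "output.mp4", fps=16)
```

`enable_model_cpu_offload()` trades some speed for VRAM, matching the single-GPU "offload" mode described above; on GPUs with headroom, `pipe.to("cuda")` can be used instead.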

Prohibited Use: generating content that violates laws, infringes rights, or spreads misinformation. The model is intended for research and creative projects that balance performance and accessibility.


The model is one component of a video generation pipeline that consists of:

  • UMT5 text encoder: ~6B parameters,
  • Transformer: ~1.3B parameters,
  • VAE: ~127M parameters.

Total: ~7.4B parameters
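A quick back-of-the-envelope check of those component sizes (values approximate; the bf16 figure assumes 2 bytes per parameter and ignores activations):

```python
# Approximate parameter counts for the pipeline components listed above.
GIB = 1024 ** 3
components = {"UMT5 encoder": 6.0e9, "Transformer": 1.3e9, "VAE": 127e6}

total = sum(components.values())
print(f"total params: {total / 1e9:.2f}B")         # ~7.43B
print(f"bf16 weights: {total * 2 / GIB:.1f} GiB")  # ~13.8 GiB
```

This is why the quoted 8.19 GB VRAM figure depends on offloading: the full set of weights in bf16 exceeds a consumer GPU's memory on its own.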


Announce Date: 01.03.2025
Parameters: 1.3B
Context: 512
Developer: Alibaba Wan Team
Diffusers Version: 0.33.0.dev0
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Wan2.1-T2V-1.3B-Diffusers capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.
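For the custom-weights scenario, Diffusers pipelines expose a LoRA loading hook; a minimal sketch, assuming a Wan2.1-compatible LoRA checkpoint (the file path is hypothetical):

```python
import torch
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
)
# Hypothetical fine-tuned adapter; any LoRA trained against this
# transformer can be loaded the same way.
pipe.load_lora_weights("path/to/my_wan_lora.safetensors")
pipe.to("cuda")
```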

Recommended server configurations for hosting Wan2.1-T2V-1.3B-Diffusers

Prices:

Name                    GPUs  Price per hour
teslat4-1.16.16.160     1     $0.33
rtx2080ti-1.10.16.500   1     $0.38
teslaa2-1.16.32.160     1     $0.38
teslaa10-1.16.32.160    1     $0.53
rtx3080-1.16.32.160     1     $0.57
rtx3090-1.16.24.160     1     $0.83
rtx4090-1.16.32.160     1     $1.02
teslav100-1.12.64.160   1     $1.20
rtx5090-1.16.64.160     1     $1.59
teslaa100-1.16.64.160   1     $2.37
h100-1.16.64.160        1     $3.83
h100nvl-1.16.96.160     1     $4.11
h200-1.16.128.160       1     $4.74
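Combining the hourly prices above with the ~4-minute generation time quoted earlier for an RTX 4090 gives a rough per-video cost (the helper is illustrative; generation time varies by GPU):

```python
# Rough cost-per-video estimate: hourly GPU price x generation time.
# ~4 minutes per 5-second 480p clip on an RTX 4090, per the model card.
def cost_per_video(price_per_hour: float, minutes: float = 4.0) -> float:
    return price_per_hour * minutes / 60.0

print(f"${cost_per_video(1.02):.3f}")  # RTX 4090 at $1.02/hr -> ~$0.068 per clip
```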


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.