Wan2.1-T2V-1.3B-Diffusers

Wan2.1-T2V-1.3B-Diffusers is a 1.3-billion-parameter text-to-video model that generates video from text prompts. It is optimized for consumer-grade GPUs: inference requires only 8.19 GB of VRAM, and generating a 5-second 480p video takes about 4 minutes on an RTX 4090 without additional optimizations.

Key Features:

  • High performance: outperforms existing open-source and commercial solutions across a wide range of metrics.
  • Support for multiple tasks:
    • Video generation from text (Text-to-Video),
    • Video generation from images (Image-to-Video),
    • Video editing, audio synthesis for video, and text-to-image generation,
    • Text generation in video — the first video model capable of synthesizing both Chinese and English text within generated frames.
  • Efficient VAE — Wan-VAE encodes and decodes Full HD (1080p) video of arbitrary length while preserving temporal consistency and keeping memory consumption low (see the sketch below).
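
A minimal sketch of using Wan-VAE on its own through the AutoencoderKLWan class in Diffusers; the tensor layout (batch, channels, frames, height, width), the 17-frame clip length, and the dtype choice are illustrative assumptions rather than documented requirements:

```python
import torch
from diffusers import AutoencoderKLWan

# Load the Wan-VAE from the model repository (float32 is assumed here for
# numerical stability, mirroring the text-to-video example further below).
vae = AutoencoderKLWan.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.float32
).to("cuda")

# Dummy video clip scaled to [-1, 1]; the (B, C, T, H, W) layout is an assumption.
video = torch.rand(1, 3, 17, 480, 832, device="cuda") * 2 - 1

with torch.no_grad():
    latents = vae.encode(video).latent_dist.sample()  # compress to latent space
    reconstruction = vae.decode(latents).sample       # decode back to pixel space

print(latents.shape, reconstruction.shape)
```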

Technical Details:
Generation:

  • Supports 480p and 720p resolutions (stability is lower for 720p due to limited training data).
  • Generation time: ~4 minutes per 5-second video (480p, without optimization).
  • Supports the Diffusers library (Hugging Face) for inference (see the sketch after this list).
  • Single-GPU mode: runs comfortably on an RTX 4090 with model offloading enabled (the offload_model option in the reference implementation, or enable_model_cpu_offload() in Diffusers).
  • Multi-GPU mode: scalable via FSDP + xDiT USP.
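
A minimal sketch of single-GPU inference with Diffusers, following the standard WanPipeline text-to-video flow; the prompt, resolution, frame count, and guidance scale below are illustrative values:

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"

# The VAE is loaded in float32, the rest of the pipeline in bfloat16.
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)
pipe.to("cuda")
# On GPUs with less VRAM, offloading trades some speed for memory:
# pipe.enable_model_cpu_offload()

output = pipe(
    prompt="A cat walks on the grass, realistic style",
    negative_prompt="blurry, low quality, distorted",
    height=480,
    width=832,
    num_frames=81,   # roughly 5 seconds at 16 fps
    guidance_scale=5.0,
).frames[0]

export_to_video(output, "output.mp4", fps=16)
```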

Prohibited Use: Generating content that violates laws, infringes on rights, or spreads misinformation. The model is intended for research and creative projects, balancing performance and accessibility.


The 1.3B model is one component of the full video generation pipeline, which consists of:

  • UMT5 text encoder: ~6B parameters,
  • Transformer: ~1.3B parameters,
  • VAE: ~127M parameters.

Total: ~7B parameters (a sketch for checking these counts follows below).
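
A rough sketch for inspecting the component sizes; the attribute names text_encoder, transformer, and vae follow the usual Diffusers pipeline conventions and are assumed to apply to WanPipeline:

```python
import torch
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
)

# Count the parameters of each pipeline component.
for name in ("text_encoder", "transformer", "vae"):
    module = getattr(pipe, name)
    params = sum(p.numel() for p in module.parameters())
    print(f"{name}: {params / 1e9:.2f}B parameters")
```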


Announce Date: 01.03.2025
Parameters: 1.3B
Context: 512
Developer: Alibaba Wan Team
License: Apache 2.0

Public endpoint

Use our pre-built public endpoints for free to test inference and explore Wan2.1-T2V-1.3B-Diffusers capabilities. You can obtain an API access token on the token management page after registration and verification.
There are no public endpoints for this model yet.

Private server

Rent your own physically dedicated instance with hourly or long-term monthly billing.

We recommend deploying private instances in the following scenarios:

  • maximize endpoint performance,
  • enable full context for long sequences,
  • ensure top-tier security for data processing in an isolated, dedicated environment,
  • use custom weights, such as fine-tuned models or LoRA adapters.

Recommended configurations for hosting Wan2.1-T2V-1.3B-Diffusers

Prices:
Name                          vCPU   RAM, MB   Disk, GB   GPU   Price, hour
teslat4-1.16.16.160           16     16384     160        1     $0.33
rtx2080ti-1.10.16.500         10     16384     500        1     $0.38
teslaa2-1.16.32.160           16     32768     160        1     $0.38
teslaa10-1.16.32.160          16     32768     160        1     $0.53
rtx3080-1.16.32.160           16     32768     160        1     $0.57
rtx3090-1.16.24.160           16     24576     160        1     $0.88
rtx4090-1.16.32.160           16     32768     160        1     $1.15
teslav100-1.12.64.160         12     65536     160        1     $1.20
rtxa5000-2.16.64.160.nvlink   16     65536     160        2     $1.23
rtx5090-1.16.64.160           16     65536     160        1     $1.59
teslaa100-1.16.64.160         16     65536     160        1     $2.58
teslah100-1.16.64.160         16     65536     160        1     $5.11
h200-1.16.128.160             16     131072    160        1     $6.98


Need help?

Contact our dedicated neural networks support team at nn@immers.cloud or send your request to the sales department at sale@immers.cloud.