Wan Fast AI video model

Generate Wan Fast videos from a prompt or a reference image

Wan Fast is the flexible video model in PerchanceAI for quick text-to-video runs and optional image-to-video workflows. Use it when you want a fast clip with audio always included.

Prompt-first workflow, optional reference image, audio always included

Social promos · Product teasers · Concept previews · Image-to-video tests

Start with a prompt

Prompt

Night streetwear campaign in Tokyo, slow camera push in, neon reflections on wet pavement, confident model walk, cinematic lighting, natural motion

Text to video · Optional image input · Audio included

What you get

A fast motion clip that moves from idea to output without extra setup


Create short AI videos from text prompts without opening editing software

Add a reference image when you want closer visual guidance for motion

Built-in audio stays on, so you do not have to manage a separate setting

2-15s: supported duration range
Audio on: for every generation
Text or image: as your starting input

Fast path to value

1. Write the scene prompt or upload a reference image
2. Choose duration, aspect ratio, and output resolution
3. Generate a Wan Fast video clip with audio included

Open the generator, start with a direct scene prompt, then add a reference image when you want tighter visual control.

Use cases

Best when you need flexible video generation without a heavy setup

Wan Fast is the broadest-fit model in the current stack, so it works well when you need to move quickly between text-led ideas and image-led motion tests.

Social promos

Turn short prompts into fast clips for reels, shorts, launch posts, and lightweight ad tests.

Product teasers

Use a prompt alone or combine it with a product image when you want more visual control over the output.

Concept previews

Generate quick motion references when you need a rough creative direction before deeper production work.

Image-to-video tests

Upload a reference image and add motion without switching to a separate image-only model right away.

How it works

Go from prompt to Wan Fast video in three direct steps

The workflow is short by design: describe the clip, set the output direction, and generate a fast result.

Describe the scene

Start with a clear text prompt, or add a reference image when you want the model to stay closer to an existing visual.

Set the output

Choose duration, resolution, and aspect ratio based on where the clip will be used.

Generate video

Render a short Wan Fast clip with audio included so you can review and iterate quickly.
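The three steps above can be sketched as a small request builder. PerchanceAI does not publish an API on this page, so the function name, endpoint-free schema, and field names below are illustrative assumptions; only the constraints they encode (2-15 second duration, audio always on, optional reference image) come from this page.

```python
# Hypothetical sketch of a Wan Fast generation request.
# All names here are assumptions for illustration; only the
# constraints (2-15s duration, audio always on, optional
# reference image) are taken from the page above.

def build_wan_fast_request(prompt, duration_s=5, aspect_ratio="9:16",
                           resolution="720p", reference_image=None):
    """Build a request dict for a Wan Fast clip (assumed schema)."""
    if not prompt and reference_image is None:
        raise ValueError("Provide a text prompt, a reference image, or both.")
    if not 2 <= duration_s <= 15:
        raise ValueError("Wan Fast supports 2-15 second clips.")
    request = {
        "model": "wan-fast",
        "prompt": prompt,
        "duration_s": duration_s,
        "aspect_ratio": aspect_ratio,
        "resolution": resolution,
        "audio": True,  # audio is always on in this integration
    }
    if reference_image is not None:
        # Optional image-to-video input for tighter visual control
        request["reference_image"] = reference_image
    return request

req = build_wan_fast_request(
    "Night streetwear campaign in Tokyo, slow camera push in",
    duration_s=8,
    aspect_ratio="9:16",
)
```

In the live product these choices are made in the generator UI rather than in code; the sketch only makes the documented constraints explicit.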

Features

What Wan Fast supports in the live generator

The features below match the live integration, so everything listed here reflects model behavior you can use right now.

Text to video

Start from a written prompt and generate a short video scene without needing source footage.

Optional image input

Add a reference image when you want the output to stay closer to a product shot, portrait, or existing frame.

Audio always included

Wan Fast renders with audio enabled in this integration, so there is no separate audio toggle to manage.

2 to 15 second clips

Use shorter tests for quick iterations or longer runs when you need a more complete scene.

FAQ

Questions before you generate

What is Wan Fast AI Video Generator?

Wan Fast AI Video Generator is the PerchanceAI video workflow for generating short clips from text prompts, with optional reference image input and audio always included.

Does Wan Fast support both text to video and image to video?

Yes. Wan Fast can generate from a prompt alone, and it also accepts a reference image when you want closer visual guidance.

Is audio optional in Wan Fast?

No. In the current integration, audio is always included for Wan Fast generations.

How long can Wan Fast videos be?

Wan Fast supports clips from 2 seconds up to 15 seconds in the current generator.

When should I use Wan Fast instead of the Wan 2.2 image-to-video models?

Use Wan Fast when you want the most flexible workflow, especially for prompt-led generation or quick image-to-video tests. Use the Wan 2.2 image-to-video models when you specifically want a reference-image-first flow.

Where do I generate Wan Fast videos?

Use the main AI video generator and select Wan Fast as the model before generating your clip.

Explore related pages

AI Video Generator

Open the live generator and choose Wan Fast in the model selector.

LTX 2 AI Video Generator

Compare Wan Fast with the text-only LTX 2 workflow.

Wan 2.2 Image to Video

Explore the dedicated image-to-video Wan 2.2 landing page.

Flexible Wan Fast workflow

Open Wan Fast and generate your next video clip

If you want one model that can start from a prompt or a reference image, Wan Fast is the fastest path in the current video stack.

Supports prompt-first creation, optional image input, and built-in audio