Seedance 2.0
Experience the new generation of motion with Seedance 2.0—ByteDance's advanced AI video model. Turn images, text, and audio into cinematically stable video with director-level control over lighting, shadows, and camera movement. Try it on kiira.ai.

What you can create
Text-to-video
Describe a scene in natural language and get a video with matching audio. The model understands multi-subject interactions, camera motion, and emotional tone. For dialogue, put the spoken lines in double quotes in your prompt; the model generates matching lip movement and sound.
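Seedance 2.0 is used through the kiira.ai studio rather than a documented public API, so the snippet below is only a minimal sketch of how a text-to-video request could be organized. The field names, option values, and sample prompt are hypothetical; the point is the prompt convention of putting spoken lines in double quotes.

```python
import json

# Hypothetical request payload -- Seedance/kiira.ai does not publish this schema.
# What matters is the prompt structure: dialogue goes in double quotes so the
# model can generate matching lip movement and speech.
payload = {
    "mode": "text-to-video",
    "prompt": (
        "Two hikers reach a foggy ridge at dawn, handheld camera slowly pushing in. "
        'The older one smiles and says "We made it before sunrise." '
        'The younger one laughs: "Barely."'
    ),
    "aspect_ratio": "16:9",   # assumed option names
    "resolution": "1080p",
    "duration_seconds": 10,
    "audio": True,            # ask for generated dialogue and ambience
}

print(json.dumps(payload, indent=2))
```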
Image-to-video
Animate a still by using it as the first frame. You can also set a last-frame image to control how the clip ends. The model keeps your input’s look and style while adding natural motion.
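As a rough illustration of the same idea in data form, a first-frame/last-frame job could be described like this. The schema is an assumption for illustration only, not the product's actual interface; omitting the last frame leaves the ending open.

```python
import json

# Hypothetical payload -- field names are illustrative, not a documented schema.
# The first frame anchors the clip's look; the optional last frame pins down the ending.
payload = {
    "mode": "image-to-video",
    "first_frame": "portrait_start.png",   # placeholder local file
    "last_frame": "portrait_end.png",      # optional: omit for an open-ended clip
    "prompt": "She turns toward the window as sunlight sweeps across the room.",
    "duration_seconds": 6,
}

print(json.dumps(payload, indent=2))
```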
Multimodal reference
Combine images, video, and audio as references—for example, a video for motion style, images for character appearance, and audio for rhythm—then describe how to blend them. Powerful for outfit swaps, product showcases, and music-synced content.
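A sketch of how such a reference bundle might be expressed, with each input tagged by the role it plays. The types, roles, and field names are hypothetical, chosen only to make the blend explicit.

```python
import json

# Hypothetical reference list -- roles and field names are assumptions used only
# to illustrate combining different inputs in one generation.
references = [
    {"type": "video", "file": "dance_loop.mp4",   "use_for": "motion style"},
    {"type": "image", "file": "model_front.jpg",  "use_for": "character appearance"},
    {"type": "image", "file": "model_back.jpg",   "use_for": "character appearance"},
    {"type": "audio", "file": "track_120bpm.wav", "use_for": "rhythm and cut timing"},
]

payload = {
    "mode": "multimodal-reference",
    "references": references,
    "prompt": "The character from the photos performs the referenced dance in a "
              "neon-lit studio, with cuts landing on the beat of the audio track.",
}
print(json.dumps(payload, indent=2))
```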
Video editing
Provide a reference video and describe what to change: replace objects, swap backgrounds, or shift the style. The model preserves the original motion and camera work while editing.
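A hypothetical edit request following the same reference-plus-instruction pattern; the schema and the preserve flags are assumptions, not documented options.

```python
import json

# Hypothetical edit request -- shows the "reference video + what to change" pattern;
# the schema is illustrative, not kiira.ai's actual interface.
payload = {
    "mode": "video-edit",
    "reference_video": "street_walk.mp4",   # placeholder clip
    "prompt": "Replace the red sedan with a vintage bicycle and shift the scene "
              "to late autumn, keeping the original camera pan and walking pace.",
    "preserve": ["motion", "camera"],        # assumed knobs for what stays fixed
}
print(json.dumps(payload, indent=2))
```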
Video extension
Provide a reference video and describe what happens next. The model continues the scene with consistent characters, environment, and style.
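If you wanted to chain several extensions into a longer sequence, the bookkeeping might look like the sketch below: each step reuses the previously generated clip as the reference and states what happens next. The helper and field names are hypothetical.

```python
# Hypothetical sketch of chaining extensions; build_extension_job is an
# illustrative helper, not a documented function.
def build_extension_job(reference_clip: str, continuation: str, seconds: int = 5) -> dict:
    return {
        "mode": "video-extend",
        "reference_video": reference_clip,
        "prompt": continuation,
        "duration_seconds": seconds,
    }

beats = [
    "The rover crests the dune and its antenna catches the first light.",
    "A dust storm rolls in from the left; the rover folds its solar panels.",
]

clip = "mars_rover_intro.mp4"      # placeholder starting clip
for beat in beats:
    job = build_extension_job(clip, beat)
    print(job)
    clip = "extended_" + clip      # in practice, the newly generated clip is reused here
```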
How to use
Pick a mode
Open the studio and choose text-to-video or image-to-video, then set aspect ratio, resolution, and duration.
Describe or upload
Write a clear prompt for motion and mood, or upload a start frame (and last frame if you need a controlled transition).
Generate & refine
Run generation, preview the result, tweak the prompt or seed, and download when you are happy with the clip.
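For readers who think in code, here is the same generate, preview, and refine loop written out as a sketch. The settings names mirror the studio controls (mode, aspect ratio, resolution, duration, seed) but are illustrative stand-ins, not a scripting interface.

```python
import random

# Hypothetical script mirroring the studio flow; the settings names stand in for
# the UI controls and are not a documented API.
settings = {
    "mode": "text-to-video",
    "aspect_ratio": "9:16",
    "resolution": "720p",
    "duration_seconds": 8,
}
prompt = "A paper boat drifts down a rain-soaked street, camera tracking low to the water."

seed = random.randint(0, 2**31 - 1)
for attempt in range(3):                  # generate, preview, refine
    print(f"attempt {attempt}: seed={seed}, settings={settings}")
    happy = attempt == 2                  # stand-in for reviewing the preview
    if happy:
        print("download clip")            # keep the take you like
        break
    seed = random.randint(0, 2**31 - 1)   # re-roll the seed (or tweak the prompt)
```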