Seedance 2.0 API — ByteDance Video Generation
Cinematic video + native audio via REST API. Free credits. No BytePlus account.
Seedance 2.0 — built for production video workflows
Seedance 2.0
ByteDance multimodal video model
Generate cinematic AI videos with Seedance 2.0 — ByteDance's flagship multimodal video generation model. Supports text, image, audio, and video as references.
Multi-Reference
Combine images, audio, and video inputs
Use multiple reference images and clips to guide subject, style, and motion in a single call. Available at /models/byteplus/seedance-20-multi-reference-to-video.
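A multi-reference request might look like the sketch below. The `reference_images` field name is an illustrative assumption, not a confirmed parameter; check the endpoint documentation for the exact schema.

```python
import json

# Sketch of a multi-reference request payload. The "reference_images"
# field name is an assumption for illustration -- confirm it against
# the endpoint's documentation before use.
payload = {
    "key": "YOUR_API_KEY",
    "model_id": "seedance-20-multi-reference-to-video",
    "prompt": "the subject from image 1 exploring the environment from image 2, golden hour",
    "reference_images": [
        "https://example.com/subject.png",
        "https://example.com/environment.png",
    ],
    "output_type": "mp4",
}
print(json.dumps(payload, indent=2))
```

POST this payload to the multi-reference endpoint with any HTTP client, exactly as in the text-to-video example further down the page.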
Start/End Frame Control
Precise narrative control
Pin the opening and closing frames of a clip. Seedance 2.0 generates the motion between them — perfect for animated transitions and narrative beats. Available at /models/byteplus/seedance-20-start-end-frame-image-to-video.
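A start/end-frame request could be shaped like this sketch. The `start_frame_image` and `end_frame_image` field names are illustrative guesses; verify them against the endpoint reference.

```python
import json

# Hypothetical start/end-frame request. The "start_frame_image" and
# "end_frame_image" field names are illustrative assumptions, not
# confirmed parameters.
payload = {
    "key": "YOUR_API_KEY",
    "model_id": "seedance-20-start-end-frame-image-to-video",
    "prompt": "morph from the first pose to the second with natural physics and fluid motion",
    "start_frame_image": "https://example.com/frame_start.png",
    "end_frame_image": "https://example.com/frame_end.png",
    "output_type": "mp4",
}
print(json.dumps(payload, indent=2))
```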
Native Audio Sync
Audio + video in one call
An industry-first model with unified audio-video joint generation. Sound effects, ambient audio, and dialogue timing are aligned to the generated footage.
One REST API
No BytePlus account required
Call Seedance via the same ModelsLab REST API you use for Kling, Wan, and Runway Aleph. One key, one bill, no BytePlus signup or Volcano Engine quota management.
Pricing
From $0.06 per clip
Pay-per-generation pricing runs roughly $0.06–$0.60 per 5-second 720p clip, depending on the variant. Every new ModelsLab account gets free credits to test Seedance.
Webhooks
Async callbacks for long renders
Pass a webhook URL and ModelsLab POSTs the finished MP4 to your endpoint when generation completes. Essential for batch and production pipelines.
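Passing a webhook might look like the sketch below: add a callback URL to the request, then parse the callback when it arrives. The `webhook` field name and the callback body shape (`status`, `output`) are assumptions for illustration; confirm both against the API reference.

```python
import json

# Sketch: attach a webhook URL so ModelsLab POSTs the result back when
# the render finishes. The "webhook" field name is an assumption.
request_payload = {
    "key": "YOUR_API_KEY",
    "model_id": "seedance-20-multi-reference-to-video",
    "prompt": "a waterfall crashing into a pool with ambient sound",
    "webhook": "https://your-app.example.com/seedance/callback",
    "output_type": "mp4",
}

def handle_callback(raw_body: bytes):
    """Parse a callback body and return the clip URL on success.

    The "status"/"output" shape here is an illustrative assumption,
    not the documented callback schema.
    """
    body = json.loads(raw_body)
    if body.get("status") == "success" and body.get("output"):
        return body["output"][0]
    return None

# Illustrative callback payload only:
print(handle_callback(b'{"status": "success", "output": ["https://cdn.example.com/clip.mp4"]}'))
```

In production, the handler would run behind your web framework's POST route and download or enqueue the finished clip.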
Python & JavaScript
Full SDK coverage
Official Python and TypeScript SDKs wrap the Seedance endpoints, and the REST API ships with an OpenAPI spec for autogenerated clients in any language.
Examples
See what Seedance 2.0 can create
Copy any prompt below and try it yourself in the playground.
Cinematic product reveal
“a sleek smartphone rotating on a minimalist white pedestal, studio lighting, shallow depth of field, cinematic, 1080p”
Multi-reference character scene
“the character from reference image 1 walking through the environment from reference image 2, golden hour lighting”
Start/end frame animation
“morph from the frame-1 pose to the frame-2 pose with natural physics and fluid motion”
Native audio video
“a waterfall crashing into a pool with natural ambient sound and reverberant splash audio”
For Developers
A few lines of code.
One endpoint. Multi-reference input. Cinematic output.
ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.
- Serverless: scales to zero, scales to millions
- Pay per clip, no minimums
- Python and JavaScript SDKs, plus REST API
import requests

response = requests.post(
    "https://modelslab.com/api/v7/video-fusion/text-to-video",
    json={
        "key": "YOUR_API_KEY",
        "width": 1920,
        "height": 1080,
        "prompt": "a dancer performing fluid contemporary moves in a sunlit studio, cinematic camera pan, 1080p",
        "model_id": "seedance-20-multi-reference-to-video",
        "num_frames": 120,
        "output_type": "mp4",
        "negative_prompt": "blurry, distorted",
    },
)
print(response.json())
Ready to create?
Start generating with the Seedance 2.0 API on ModelsLab.