Seedance 2.0 API — ByteDance Video Generation

Cinematic video + native audio via REST API. Free credits. No BytePlus account.

Seedance 2.0 — built for production video workflows

Seedance 2.0

ByteDance multimodal video model

Generate cinematic AI videos with Seedance 2.0 — ByteDance's flagship multimodal video generation model. Supports text, image, audio, and video as references.

Multi-Reference

Combine images, audio, and video inputs

Use multiple reference images and clips to guide subject, style, and motion in a single call. Available at /models/byteplus/seedance-20-multi-reference-to-video.
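A minimal sketch of a multi-reference call, reusing the request shape from the code sample further down this page. The reference_images field name and URLs are illustrative assumptions, not the documented schema; check the model page for exact parameters.

import requests

# Sketch only: "reference_images" is an assumed field name, not confirmed by the docs.
response = requests.post(
    "https://modelslab.com/api/v7/video-fusion/text-to-video",
    json={
        "key": "YOUR_API_KEY",
        "model_id": "seedance-20-multi-reference-to-video",
        "prompt": "the character from reference image 1 walking through the environment from reference image 2, golden hour lighting",
        "reference_images": [
            "https://example.com/character.png",    # subject reference (illustrative URL)
            "https://example.com/environment.png",  # environment/style reference (illustrative URL)
        ],
        "output_type": "mp4",
    },
)
print(response.json())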

Start/End Frame Control

Precise narrative control

Pin the opening and closing frames of a clip. Seedance 2.0 generates the motion between them — perfect for animated transitions and narrative beats. Available at /models/byteplus/seedance-20-start-end-frame-image-to-video.
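As a rough sketch, assuming the same v7 video-fusion request shape shown later on this page: the init_image and end_image field names below are illustrative, not the documented parameter names.

import requests

# Sketch only: "init_image" and "end_image" are assumed field names; see docs.modelslab.com for the real schema.
response = requests.post(
    "https://modelslab.com/api/v7/video-fusion/text-to-video",
    json={
        "key": "YOUR_API_KEY",
        "model_id": "seedance-20-start-end-frame-image-to-video",
        "prompt": "morph from the frame-1 pose to the frame-2 pose with natural physics and fluid motion",
        "init_image": "https://example.com/first-frame.png",  # opening frame (illustrative URL)
        "end_image": "https://example.com/last-frame.png",    # closing frame (illustrative URL)
        "output_type": "mp4",
    },
)
print(response.json())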

Native Audio Sync

Audio + video in one call

The first model in the industry to offer unified audio-video joint generation. Sound effects, ambient audio, and dialogue timing stay aligned to the generated footage.

One REST API

No BytePlus account required

Call Seedance via the same ModelsLab REST API you use for Kling, Wan, and Runway Aleph. One key, one bill, no BytePlus signup or Volcano Engine quota management.

Pricing

From $0.06 per clip

Pay-per-generation pricing, roughly $0.06–$0.60 per 5-second 720p clip depending on the variant. Every new ModelsLab account gets free credits to test Seedance.

Webhooks

Async callbacks for long renders

Pass a webhook URL and ModelsLab POSTs the finished MP4 to your endpoint when generation completes. Essential for batch and production pipelines.
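A sketch of the webhook pattern, assuming "webhook" as the callback field name; the exact parameter name is in the docs at docs.modelslab.com.

import requests

# Sketch only: "webhook" is an assumed field name for the callback URL.
response = requests.post(
    "https://modelslab.com/api/v7/video-fusion/text-to-video",
    json={
        "key": "YOUR_API_KEY",
        "model_id": "seedance-20-multi-reference-to-video",
        "prompt": "a waterfall crashing into a pool with natural ambient sound",
        "output_type": "mp4",
        "webhook": "https://yourapp.example.com/seedance-callback",  # ModelsLab POSTs the finished MP4 URL here
    },
)
# The immediate response acknowledges the job; the MP4 URL arrives at the webhook when the render finishes.
print(response.json())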

Python & JavaScript

Full SDK coverage

Official Python and TypeScript SDKs wrap the Seedance endpoints. REST + OpenAPI spec for autogenerated clients in any language.

Examples

See what Seedance 2.0 can create

Copy any prompt below and try it yourself in the playground.

Cinematic product reveal

a sleek smartphone rotating on a minimalist white pedestal, studio lighting, shallow depth of field, cinematic, 1080p

Multi-reference character scene

the character from reference image 1 walking through the environment from reference image 2, golden hour lighting

Start/end frame animation

morph from the frame-1 pose to the frame-2 pose with natural physics and fluid motion

Native audio video

a waterfall crashing into a pool with natural ambient sound and reverberant splash audio

For Developers

A few lines of code.
One endpoint. Multi-reference input. Cinematic output.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per clip, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

# Submit a Seedance 2.0 generation request to the ModelsLab video-fusion endpoint.
response = requests.post(
    "https://modelslab.com/api/v7/video-fusion/text-to-video",
    json={
        "key": "YOUR_API_KEY",
        "width": 1920,
        "height": 1080,
        "prompt": "a dancer performing fluid contemporary moves in a sunlit studio, cinematic camera pan, 1080p",
        "model_id": "seedance-20-multi-reference-to-video",
        "num_frames": 120,
        "output_type": "mp4",
        "negative_prompt": "blurry, distorted",
    },
)
print(response.json())

FAQ

Common questions about Seedance 2.0 API — ByteDance Video Generation

Read the docs

What is Seedance 2.0?

Seedance 2.0 is ByteDance's flagship multimodal video generation API. It accepts text, image, audio, and video inputs and produces cinematic 1080p video clips with synchronized native audio. On ModelsLab you access Seedance 2.0 via a single REST endpoint — no BytePlus or Volcano Engine account required.

How much does the Seedance 2.0 API cost?

Seedance 2.0 on ModelsLab is pay-per-generation, roughly $0.06–$0.60 per 5-second 720p clip depending on the variant (multi-reference vs. start/end frame). Transparent per-call pricing with no monthly commitments.

Can I try Seedance 2.0 for free?

Yes. Every new ModelsLab account gets free credits on signup — no credit card required. Use them to test Seedance 2.0 text-to-video, image-to-video, and multi-reference variants before picking a paid plan.

How do I get an API key?

Create a free ModelsLab account at modelslab.com and generate your API key from the dashboard. The same key works for every Seedance variant (1.0 Pro, 1.5 Pro, 2.0) plus every other video model (Kling, Wan, Runway Aleph).

Where can I find the documentation?

Endpoint reference, parameter schemas, request/response examples, and SDK snippets live at docs.modelslab.com. Each Seedance variant has its own model page at /models/byteplus/ with direct-copy code samples.

What's new in Seedance 2.0 compared to Seedance 1.5 Pro?

Seedance 2.0 adds multimodal input (text + image + audio + video references in a single call), native audio-video joint generation, and tighter motion coherence across longer clips. Seedance 1.5 Pro is text/image-to-video only and cheaper per clip, which makes it the right pick for high-volume basic workflows.

Yes. The "start/end frame" variant takes a first-frame and last-frame image and animates the transition between them. The "multi-reference" variant accepts multiple image references to control subject, style, and scene.

How long can generated clips be?

Each API call returns up to 5–10 seconds of 720p or 1080p video. For longer sequences, chain multiple calls at scene boundaries and stitch the clips together — a common pattern for 30–60 second social videos.
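One way to handle the stitching step, as a sketch: download each returned MP4 and concatenate them with ffmpeg's concat demuxer. This assumes ffmpeg is installed locally; the clip URLs and filenames are illustrative.

import subprocess
import requests

# Sketch only: clip URLs would come from earlier Seedance calls; these are illustrative.
clip_urls = [
    "https://example.com/scene-1.mp4",
    "https://example.com/scene-2.mp4",
    "https://example.com/scene-3.mp4",
]

# Download each clip and build an ffmpeg concat manifest.
with open("clips.txt", "w") as manifest:
    for i, url in enumerate(clip_urls):
        path = f"scene-{i}.mp4"
        with open(path, "wb") as f:
            f.write(requests.get(url).content)
        manifest.write(f"file '{path}'\n")

# Stitch without re-encoding (clips must share codec and resolution).
subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0", "-i", "clips.txt", "-c", "copy", "final.mp4"],
    check=True,
)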

Does the API support webhooks?

Yes. Pass a webhook URL in the request body and ModelsLab POSTs the completed MP4 URL to your endpoint when generation finishes. Essential for long-running generations and high-throughput batch jobs.

How does Seedance 2.0 compare to Kling, Wan, and Runway Aleph?

Seedance 2.0 wins on multimodal input and native audio. Kling 3.0 leads cinematic motion realism. Runway Aleph is purpose-built for video-to-video editing (not generation). Wan 2.7 is cheapest for bulk text-to-video. All four are available under one ModelsLab API key — route between them per use case.

Are there official SDKs for Seedance 2.0?

The official ModelsLab Python and TypeScript SDKs cover every Seedance endpoint. Community GitHub wrappers also exist (Anil-matcha/Seedance-2), but the official SDKs are recommended for production — they handle retries, webhooks, and async polling out of the box.

Ready to create?

Start generating with the Seedance 2.0 API — ByteDance video generation — on ModelsLab.