What Is Seedance 2.0?
Seedance 2.0 is ByteDance's production-grade AI video generation model, released in February 2026. It generates clips up to 20 seconds long at up to 1080p, with realistic physics simulation and native multi-shot storytelling, and it is one of the first models to maintain motion coherence across scene cuts without post-processing.
What sets Seedance 2.0 apart from earlier video generation models:
- 20-second clips — significantly longer than the 4-5 second clips most models generate
- Realistic physics — objects fall, collide, and deform with physically accurate motion
- Multi-shot storytelling — maintains character and scene consistency across cuts
- Native audio sync — generated motion syncs to provided audio tracks
- 1080p output — production-ready resolution without upscaling
ModelsLab exposes Seedance 2.0 through two API endpoints: text-to-video (seedance-t2v) and image-to-video (seedance-i2v), both available via the BytePlus model family.
Getting Started: Authentication
All ModelsLab API calls are authenticated with your API key. Sign up at modelslab.com to get your key, then use it across all requests. The key can be sent as a Bearer token in the Authorization header; the v6 examples in this guide pass it in the key field of the JSON request body instead.
Authorization: Bearer YOUR_MODELSLAB_API_KEY
Content-Type: application/json
The endpoint for text-to-video requests is:
POST https://modelslab.com/api/v6/video/text2video
Image-to-video requests use the separate img2video endpoint, covered below.
Seedance 2.0 Text-to-Video API
The text-to-video endpoint generates a video from a text prompt. Set model_id to seedance-t2v to route your request through the Seedance 2.0 model.
Python Example
import requests
import time

API_KEY = "your_modelslab_api_key"

def generate_seedance_video(prompt: str, duration: int = 10) -> str:
    """Generate a video using Seedance 2.0 text-to-video."""
    url = "https://modelslab.com/api/v6/video/text2video"
    payload = {
        "key": API_KEY,
        "model_id": "seedance-t2v",
        "prompt": prompt,
        "negative_prompt": "low quality, blurry, distorted, artifacts",
        "height": 720,
        "width": 1280,
        "num_frames": duration * 24,  # 24fps
        "num_inference_steps": 30,
        "guidance_scale": 7.5,
        "output_type": "mp4",
        "webhook": None,
        "track_id": None
    }
    response = requests.post(url, json=payload)
    data = response.json()
    if data.get("status") == "processing":
        # Poll for result
        fetch_url = data.get("fetch_result")
        return poll_for_result(fetch_url)
    elif data.get("status") == "success":
        return data["output"][0]
    else:
        raise Exception(f"Generation failed: {data}")

def poll_for_result(fetch_url: str, max_retries: int = 30) -> str:
    """Poll the fetch URL until the video is ready."""
    for attempt in range(max_retries):
        time.sleep(10)
        response = requests.post(fetch_url, json={"key": API_KEY})
        data = response.json()
        if data.get("status") == "success":
            return data["output"][0]
        elif data.get("status") == "error":
            raise Exception(f"Error: {data.get('message')}")
        print(f"Attempt {attempt + 1}: still processing...")
    raise TimeoutError("Video generation timed out after 5 minutes")

# Example usage
video_url = generate_seedance_video(
    prompt="A futuristic city at sunset, flying cars weaving between glass towers, cinematic wide shot",
    duration=10
)
print(f"Video ready: {video_url}")
cURL Example
curl -X POST "https://modelslab.com/api/v6/video/text2video" \
  -H "Content-Type: application/json" \
  -d '{
    "key": "YOUR_API_KEY",
    "model_id": "seedance-t2v",
    "prompt": "A timelapse of a city waking up at dawn, golden light, wide aerial shot",
    "negative_prompt": "low quality, blurry",
    "height": 720,
    "width": 1280,
    "num_inference_steps": 30,
    "output_type": "mp4"
  }'
Seedance 2.0 Image-to-Video API
The image-to-video endpoint animates a still image into a video. Use model_id seedance-i2v. This is particularly effective for product shots, architectural visualizations, and portrait animations.
Python Example
import requests
import base64

API_KEY = "your_modelslab_api_key"

def image_to_video(image_path: str, prompt: str) -> str:
    """Animate an image using Seedance 2.0 image-to-video."""
    url = "https://modelslab.com/api/v6/video/img2video"
    # Read and encode the image as a base64 data URI
    with open(image_path, "rb") as f:
        image_data = base64.b64encode(f.read()).decode()
    payload = {
        "key": API_KEY,
        "model_id": "seedance-i2v",
        "prompt": prompt,
        "init_image": f"data:image/jpeg;base64,{image_data}",
        "height": 720,
        "width": 1280,
        "num_inference_steps": 30,
        "guidance_scale": 7.5,
        "output_type": "mp4"
    }
    response = requests.post(url, json=payload)
    data = response.json()
    if data.get("status") == "processing":
        # Reuses poll_for_result from the text-to-video example above
        return poll_for_result(data["fetch_result"])
    if data.get("status") == "success":
        return data["output"][0]
    raise Exception(f"Generation failed: {data}")

# Animate a product photo
video = image_to_video(
    "product_photo.jpg",
    "The product rotates slowly, studio lighting, professional reveal"
)
Node.js Integration
const axios = require('axios');

const API_KEY = 'your_modelslab_api_key';

async function generateVideo(prompt) {
  const response = await axios.post(
    'https://modelslab.com/api/v6/video/text2video',
    {
      key: API_KEY,
      model_id: 'seedance-t2v',
      prompt: prompt,
      negative_prompt: 'low quality, blurry, distorted',
      height: 720,
      width: 1280,
      num_inference_steps: 30,
      output_type: 'mp4'
    }
  );
  const { status, output, fetch_result } = response.data;
  if (status === 'success') {
    return output[0];
  } else if (status === 'processing') {
    return await pollResult(fetch_result);
  }
  throw new Error(`Generation failed: ${JSON.stringify(response.data)}`);
}

async function pollResult(fetchUrl, retries = 20) {
  for (let i = 0; i < retries; i++) {
    await new Promise(r => setTimeout(r, 10000)); // wait 10s
    const res = await axios.post(fetchUrl, { key: API_KEY });
    if (res.data.status === 'success') return res.data.output[0];
    if (res.data.status === 'error') throw new Error(res.data.message);
  }
  throw new Error('Timeout waiting for video');
}

generateVideo('A mountain lake at sunrise, perfect reflection, 4K cinematic')
  .then(url => console.log('Video:', url))
  .catch(console.error);
Key API Parameters
Common Parameters for Both Endpoints
- model_id: seedance-t2v (text-to-video) or seedance-i2v (image-to-video)
- prompt: Detailed description of the video content. More detail = better results.
- negative_prompt: What to avoid (e.g., "low quality, blurry, artifacts, shaky")
- height / width: Output resolution. Recommended: 1280x720 (landscape) or 1080x1920 (portrait)
- num_inference_steps: 20-30 for speed, 40-50 for maximum quality
- guidance_scale: 7-8 for balanced results. Higher values follow the prompt more strictly.
- output_type: mp4 (recommended) or gif
- webhook: Optional URL for async notification when generation completes
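One way to keep these parameters consistent across requests is a small builder that validates the recommended ranges before you hit the API. build_video_payload is a hypothetical helper for illustration, not part of any ModelsLab SDK:

```python
def build_video_payload(api_key, prompt, model_id="seedance-t2v",
                        width=1280, height=720, steps=30,
                        guidance_scale=7.5,
                        negative_prompt="low quality, blurry"):
    """Assemble and sanity-check a request payload before sending."""
    if model_id not in ("seedance-t2v", "seedance-i2v"):
        raise ValueError(f"Unknown model_id: {model_id}")
    if not 20 <= steps <= 50:
        raise ValueError("num_inference_steps should be between 20 and 50")
    return {
        "key": api_key,
        "model_id": model_id,
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "width": width,
        "height": height,
        "num_inference_steps": steps,
        "guidance_scale": guidance_scale,
        "output_type": "mp4",
    }
```

Catching an out-of-range value locally is cheaper than burning credits on a request the API will reject or render badly.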
Prompt Engineering for Seedance 2.0
Seedance 2.0's physics engine responds well to specific motion descriptions. Here are prompt patterns that consistently produce strong results:
Effective Prompt Patterns
- Camera movement: "slow pan left", "tracking shot", "aerial drone view", "zoom out from"
- Physics specifics: "water splashing", "leaves falling", "fabric rippling in wind"
- Lighting cues: "golden hour light", "neon reflections on wet pavement", "volumetric fog"
- Style qualifiers: "cinematic", "8K", "film grain", "studio lighting", "documentary style"
Prompt Templates
# Landscape/Nature
"[scene], [time of day], [weather], [camera movement], cinematic, 4K, film grain"
# Product showcase
"[product] rotating on [surface], [lighting type], clean background, professional photography style"
# Urban/Architecture
"[location] at [time], [atmospheric effect], [camera angle], wide establishing shot, cinematic"
# Character/Motion
"[subject] [action], [setting], [lighting], tracking shot, smooth motion"
Handling Async Generation
Seedance 2.0 videos typically take 60-180 seconds to generate depending on length and resolution. ModelsLab returns a fetch_result URL when the status is processing. Poll this URL every 10 seconds until you get status: success.
For production applications, use webhooks instead of polling:
payload = {
    "key": API_KEY,
    "model_id": "seedance-t2v",
    "prompt": "...",
    "webhook": "https://your-app.com/webhooks/video-ready",
    "track_id": "job_12345"  # Your internal job ID for tracking
}
ModelsLab will POST to your webhook URL when the video is ready, including the track_id so you can match the result to your job queue.
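On the receiving side, the handler only needs to parse the delivery and match track_id against your job queue. The sketch below assumes the webhook body mirrors the polling responses shown earlier (status, output, track_id); verify the exact shape against a live delivery before relying on it. The framework-agnostic function would be called from whatever web server routes your /webhooks/video-ready endpoint:

```python
import json

# In-memory job registry; in production this would be a database.
jobs = {"job_12345": {"status": "pending", "video_url": None}}

def handle_video_webhook(raw_body: bytes) -> dict:
    """Process a webhook delivery and update the matching job.

    Assumes the body carries status / output / track_id, mirroring
    the polling responses shown earlier in this guide.
    """
    data = json.loads(raw_body)
    track_id = data.get("track_id")
    if track_id not in jobs:
        return {"ok": False, "reason": "unknown track_id"}
    if data.get("status") == "success":
        jobs[track_id]["status"] = "done"
        jobs[track_id]["video_url"] = data["output"][0]
    else:
        jobs[track_id]["status"] = data.get("status", "error")
    return {"ok": True}
```

Rejecting unknown track_id values is a cheap guard against replayed or misrouted deliveries.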
Seedance 2.0 vs Competing Models
Here's how Seedance 2.0 compares to other video generation models available via API:
- Seedance 2.0 (seedance-t2v): Best physics realism, 20s max, 1080p, production use confirmed
- Kling 3.0: Strong character consistency, shorter clips (5-10s), excellent for people
- Sora 2: Highest visual quality ceiling, but strict content policy and high cost
- Runway Gen-4: Fast generation, good for short clips, limited to 10s max
- Wan 2.1: Open-weights, self-hostable, lower quality ceiling
For most production use cases — product videos, marketing content, short films — Seedance 2.0 via ModelsLab provides the best quality-to-cost ratio.
Error Handling and Rate Limits
import requests
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=2, min=5, max=30))
def generate_with_retry(prompt: str) -> dict:
    """Submit a generation request, retrying transient failures with backoff."""
    response = requests.post(
        "https://modelslab.com/api/v6/video/text2video",
        json={
            "key": API_KEY,
            "model_id": "seedance-t2v",
            "prompt": prompt,
            "height": 720,
            "width": 1280
        },
        timeout=30
    )
    response.raise_for_status()
    data = response.json()
    if data.get("status") == "error":
        raise Exception(data.get("message", "Unknown error"))
    return data
# Common error responses:
# {"status": "error", "message": "Invalid API key"} → Check your key
# {"status": "error", "message": "Insufficient credits"} → Top up balance
# {"status": "error", "message": "Content policy violation"} → Adjust prompt
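Since those three failures call for different reactions (fix config, top up, rewrite prompt), it can help to surface them as distinct exception types. This is a sketch, not an official error taxonomy; the string matching is deliberately loose in case the API wording changes:

```python
class ModelsLabError(Exception): pass
class AuthError(ModelsLabError): pass
class CreditsError(ModelsLabError): pass
class ContentPolicyError(ModelsLabError): pass

def raise_for_api_error(data: dict) -> None:
    """Translate an error response body into a typed exception."""
    if data.get("status") != "error":
        return
    message = (data.get("message") or "").lower()
    if "api key" in message:
        raise AuthError(message)
    if "credit" in message:
        raise CreditsError(message)
    if "content policy" in message:
        raise ContentPolicyError(message)
    raise ModelsLabError(message or "Unknown error")
```

Callers can then retry on ContentPolicyError with an adjusted prompt while treating AuthError as fatal.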
Pricing
ModelsLab charges per video generation. Seedance 2.0 pricing depends on resolution and duration — expect $0.05-0.15 per generation at standard settings. You can check your current balance and usage in the ModelsLab dashboard.
For high-volume production (100+ videos/day), contact ModelsLab for enterprise pricing with volume discounts and dedicated capacity.
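For budgeting, the arithmetic is simple enough to script. The default rate below is the midpoint of the $0.05-0.15 range quoted above and is purely illustrative; check the dashboard for your actual per-generation cost:

```python
def estimate_monthly_cost(videos_per_day: int,
                          cost_per_video: float = 0.10,
                          days: int = 30) -> float:
    """Rough monthly spend estimate at a flat per-video rate.

    cost_per_video defaults to the midpoint of the quoted
    $0.05-0.15 range; substitute your real dashboard rate.
    """
    return round(videos_per_day * cost_per_video * days, 2)
```

At 100 videos/day the midpoint rate works out to about $300/month, roughly the scale where the enterprise pricing conversation becomes worthwhile.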
Build With Seedance 2.0 on ModelsLab
Seedance 2.0 is production-ready and available now via the ModelsLab API. With endpoints for both text-to-video and image-to-video, you can integrate state-of-the-art video generation into your application in under an hour.
Sign up for a free ModelsLab account to get your API key and start generating. The platform includes 100 free credits to test with, and scales to millions of requests per month for production workloads.
