
Lipsync-2

by Sync

Lipsync-2 is a zero-shot video lip-sync model: it matches mouth movements to spoken audio in live-action, animated, or AI-generated clips with no training required. It preserves each speaker's style, supports up to 4K resolution, and enables multilingual dubbing and post-production dialogue editing.




About Lipsync-2

Lipsync-2 is a closed-source, zero-shot video lip-sync model from Sync. Given a video and a spoken-audio track, it matches the on-screen mouth movements to the new audio without any per-speaker training or fine-tuning, while preserving each speaker's individual speaking style. It works across live-action, animated, and AI-generated footage, supports resolutions up to 4K, and is well suited to multilingual dubbing and post-production dialogue editing.

Technical Specifications

Model ID
lipsync-2
Provider
Sync
Category
Video Models
Task
Video Generation
Price
$0.07 per second
Added
June 27, 2025

Key Features

  • Zero-shot lip syncing with no per-speaker training or fine-tuning
  • Preserves each speaker's unique speaking style
  • Works on live-action, animated, and AI-generated footage
  • Supports resolutions up to 4K
  • Multilingual dubbing and post-production dialogue editing

Quick Start

Integrate Lipsync-2 into your application with a single API call. Get your API key from the pricing page to get started.

import requests
import json

url = "https://modelslab.com/api/v7/video-fusion/lip-sync"
headers = {"Content-Type": "application/json"}
data = {
    "model_id": "lipsync-2",
    "prompt": "your prompt here",
    "key": "YOUR_API_KEY"
}

try:
    response = requests.post(url, headers=headers, json=data)
    response.raise_for_status()  # Raises an HTTPError for bad responses (4XX or 5XX)
    result = response.json()
    print("API Response:")
    print(json.dumps(result, indent=2))
except requests.exceptions.HTTPError as http_err:
    print(f"HTTP error occurred: {http_err} - {response.text}")
except Exception as err:
    print(f"Other error occurred: {err}")
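The example above sends only a text prompt, but lip syncing also needs a source video and an audio track. The exact request fields are defined in the API documentation; the `init_video` and `init_audio` names below are assumptions used for illustration, not the confirmed schema.

```python
import json

def build_lipsync_payload(video_url: str, audio_url: str, api_key: str) -> dict:
    """Sketch of a lip-sync request body. The "init_video" and "init_audio"
    field names are assumptions -- check the official API docs for the
    authoritative schema."""
    return {
        "model_id": "lipsync-2",
        "init_video": video_url,  # URL of the source video (assumed field name)
        "init_audio": audio_url,  # URL of the speech audio to sync to (assumed field name)
        "key": api_key,
    }

payload = build_lipsync_payload(
    "https://example.com/clip.mp4",
    "https://example.com/dialogue.wav",
    "YOUR_API_KEY",
)
print(json.dumps(payload, indent=2))
```

Pass the resulting dict as the `json=` argument to `requests.post`, exactly as in the snippet above.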

View the full API documentation for SDKs, code examples in Python, JavaScript, and more.

Pricing

Lipsync-2 API costs $0.07 per second. Pay only for what you use with no minimum commitments. View pricing plans
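Because billing is per second, total cost is simply clip length times the rate. A quick back-of-the-envelope check using the $0.07/second price above:

```python
PRICE_PER_SECOND = 0.07  # USD -- Lipsync-2 rate from the pricing above

def estimate_cost(duration_seconds: float) -> float:
    """Estimated charge in USD for a clip of the given duration."""
    return round(duration_seconds * PRICE_PER_SECOND, 2)

print(estimate_cost(30))   # 30-second clip -> 2.1
print(estimate_cost(300))  # 5-minute clip -> 21.0
```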

Use Cases

  • Multilingual dubbing of film, marketing, and promotional video
  • Post-production dialogue editing and replacement
  • Localizing short-form social media content
  • Syncing dialogue to animated and AI-generated characters

Lipsync-2 FAQ

What is Lipsync-2?

Lipsync-2 is Sync's zero-shot video lip-sync model. It matches mouth movements to spoken audio in live-action, animated, or AI-generated clips with no training needed, preserves each speaker's style, supports up to 4K, and enables multilingual dubbing and post-production dialogue editing.

How do I integrate Lipsync-2 into my application?

You can integrate Lipsync-2 into your application with a single API call. Sign up on ModelsLab to get your API key, then use the model ID "lipsync-2" in your API requests. We provide SDKs for Python, JavaScript, and cURL examples in the API documentation.

How much does Lipsync-2 cost?

Lipsync-2 costs $0.07 per second. ModelsLab uses pay-per-use pricing with no minimum commitments, and a free tier is available to get started.

What is the model ID for Lipsync-2?

The model ID for Lipsync-2 is "lipsync-2". Use this ID in your API requests to specify this model.

Is there a free tier?

Yes, ModelsLab offers a free tier that lets you try Lipsync-2 and other AI models. Sign up to get free API credits and start building immediately.