Available now on ModelsLab · Video Generation

Hailuo 02 Start/End Frame Image To Video

Keyframe videos. Precisely.

Control Every Frame. Generate Faster.

Precise Control

Start and End Frame

Set beginning and ending positions to morph images into seamless video transitions.

Cinema Quality

Physics-Aware Motion

Frame-by-frame generation simulates realistic fur, fluid, lighting, and physics automatically.

Fast Processing

4-8 Minute Generation

Standard 768p renders in 4 minutes; Pro 1080p in 8 minutes for production workflows.

Examples

See what Hailuo 02 Start/End Frame Image To Video can create

Copy any prompt below and try it yourself in the playground.

Urban Timelapse

A cityscape at sunset transitioning to night, golden hour lighting fading to neon signs, camera panning across skyscrapers, cinematic depth of field.

Product Transform

Sleek smartphone rotating on a minimalist white surface, soft studio lighting, product photography style, shallow depth of field highlighting details.

Nature Motion

Mountain landscape with clouds drifting across peaks, golden sunlight illuminating valleys, camera slowly tilting upward, cinematic composition.

Fluid Graphics

Abstract liquid paint flowing and morphing between vibrant colors, smooth transitions, professional motion graphics style, high contrast lighting.

For Developers

A few lines of code.
Two frames. Infinite motion.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per second, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

response = requests.post(
    "https://modelslab.com/api/v7/video-fusion/image-to-video",
    json={
        "key": "YOUR_API_KEY",
        "prompt": "Animate the seasonal transformation — snow melting, colors emerging, flowers blooming, and the light warming up as winter fades into spring.",
        "init_image": "",  # URL of your source image
    },
)
print(response.json())

FAQ

Common questions about Hailuo 02 Start/End Frame Image To Video

Read the docs

What is Hailuo 02 Start/End Frame?

Hailuo 02 Start/End Frame is an AI video generation model that converts static images into dynamic videos by setting precise start and end frames. You provide two images (before and after), add a motion description, and the model generates a seamless video transition between them.

How does it work?

Upload your first frame and last frame, then describe the video content you want generated. The model interpolates realistic motion between the two keyframes using physics-aware diffusion. You can also use End Frame Only if you only need the ending position.
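The request follows the same shape as the image-to-video sample in the developer section above. A minimal sketch, assuming the documented `init_image` field for the first keyframe and a hypothetical `end_image` field for the last one (confirm both field names against the ModelsLab API reference before use):

```python
import json


def build_keyframe_request(api_key, prompt, first_frame_url, last_frame_url):
    """Assemble the JSON body for a start/end-frame generation call.

    NOTE: "init_image" mirrors the image-to-video example on this page;
    "end_image" is an assumed name for the closing keyframe and must be
    verified against the official API documentation.
    """
    return {
        "key": api_key,
        "prompt": prompt,
        "init_image": first_frame_url,  # first keyframe
        "end_image": last_frame_url,    # last keyframe (assumed field name)
    }


payload = build_keyframe_request(
    "YOUR_API_KEY",
    "Golden-hour skyline fading to neon night, slow pan across skyscrapers.",
    "https://example.com/sunset.jpg",
    "https://example.com/night.jpg",
)
print(json.dumps(payload, indent=2))
```

From here the payload is POSTed to the endpoint exactly as in the developer example.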

What resolutions and durations are supported?

The Standard model supports 768p at 6 or 10 seconds; the Pro model supports 1080p at 6 seconds. Both output 25fps MP4, with processing times of approximately 4 minutes (Standard) and 8 minutes (Pro).
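The tier limits above can be captured as a small lookup table, useful for validating a requested clip length before submitting a job (values taken from this page; a sketch, not an authoritative spec):

```python
# Output specs per model tier, as listed on this page.
SPECS = {
    "standard": {"resolution": "768p", "durations_s": [6, 10],
                 "fps": 25, "container": "MP4", "approx_render_min": 4},
    "pro":      {"resolution": "1080p", "durations_s": [6],
                 "fps": 25, "container": "MP4", "approx_render_min": 8},
}


def supported(tier: str, duration_s: int) -> bool:
    """Check whether a tier supports the requested clip length."""
    return duration_s in SPECS[tier]["durations_s"]


print(supported("standard", 10))  # Standard allows 6s or 10s clips
print(supported("pro", 10))       # Pro tops out at 6s
```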

What image formats are supported?

Supported formats include JPG, JPEG, PNG, WebP, GIF, and AVIF. Images must have an aspect ratio between 2:5 and 5:2, a minimum of 300px on the shorter side, and a maximum file size of 20MB for optimal results.
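Those constraints are easy to pre-flight locally before uploading. A sketch of a client-side check (the server performs its own validation; this only mirrors the limits stated above):

```python
ALLOWED_EXTENSIONS = {"jpg", "jpeg", "png", "webp", "gif", "avif"}
MAX_BYTES = 20 * 1024 * 1024  # 20 MB cap stated on this page


def validate_frame(filename: str, width: int, height: int, size_bytes: int) -> list:
    """Return a list of constraint violations (empty list means the image passes)."""
    problems = []
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        problems.append(f"unsupported format: .{ext}")
    ratio = width / height
    if not (2 / 5 <= ratio <= 5 / 2):   # aspect ratio between 2:5 and 5:2
        problems.append(f"aspect ratio {ratio:.2f} outside 2:5 to 5:2")
    if min(width, height) < 300:        # shorter side must be at least 300 px
        problems.append("shorter side below 300 px")
    if size_bytes > MAX_BYTES:
        problems.append("file larger than 20 MB")
    return problems


print(validate_frame("frame.png", 1920, 1080, 2_000_000))  # []
```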

How is this different from standard image-to-video?

This model gives you precise keyframe control by setting both beginning and ending positions, enabling creative morphing effects and directional motion. Standard image-to-video generates motion from a single image without endpoint specification.

What are best practices for good results?

Use high-quality source images with clear subjects, write descriptive prompts specifying motion type and cinematography details, and ensure images meet the technical specifications above. Detailed prompts covering lighting, composition, and camera movement yield better results.

Ready to create?

Start generating with Hailuo 02 Start/End Frame Image To Video on ModelsLab.