---
title: Wan2.6 Image To Video API | Image to Video | ModelsLab
description: Generate cinematic 1080p, 24fps videos from a single image with multi-shot storytelling, native lip-sync, 15s length, and consumer-GPU support.
url: https://modelslab.com/models/alibaba_cloud/wan2.6-i2v.md
canonical: https://modelslab.com/models/alibaba_cloud/wan2.6-i2v.md
type: product
component: Playground/Endpoint/Index
generated_at: 2026-04-16T21:09:18.029807Z
---

[![Wan2.6 Image To Video thumbnail](https://assets.topadvisor.com/media/_solution_logo_09222023_8164684.png)](https://modelslab.com/models/alibaba_cloud)

Wan2.6 Image To Video
---

[by Alibaba Cloud](https://modelslab.com/models/alibaba_cloud)

Generate cinematic 1080p, 24fps videos from a single image with multi-shot storytelling, native lip-sync, 15s length, and consumer-GPU support.

`wan2.6-i2v`

Closed Source Model [LLMs.txt](https://modelslab.com/models/alibaba_cloud/wan2.6-i2v/llms.txt) [Learn more](https://modelslab.com/wan26-image-to-video)

[API Playground](/models/alibaba_cloud/wan2.6-i2v) · [API Documentation](/models/alibaba_cloud/wan2.6-i2v/api)

Input
---

The playground accepts the following inputs:

- Image URL (reference image)
- Audio URL (optional; upload a file or record audio)
- Prompt
- Duration
- Resolution
- Advanced settings for finer control over the generation

Example prompt:

The person from the reference image is a travel vlogger standing on the Great Wall of China, speaking directly to the camera in a natural vlog style. Multishot cinematic sequence starting with a medium close-up selfie shot, the vlogger holding the camera, relaxed expression, light wind, then a smooth pan revealing the Great Wall stretching across the mountains with tourists clearly visible in the background, followed by an over-the-shoulder shot of the vlogger pointing toward the scenic views. The vlogger says clearly and naturally for about 5 seconds: “Right now, I’m standing on the Great Wall of China… and the view here is absolutely unreal.” Add realistic outdoor ambience with soft wind sounds, distant crowd murmurs, footsteps on stone, and clean vlog-style voice audio. Ultra-realistic visuals, perfect face consistency with the reference image, sharp background details, natural daylight, cinematic color grading, stable camera motion, authentic travel vlog mood, immersive and inspiring atmosphere, no distortions, no extra people, duration approximately 5 seconds.


Cost: $0.10/second at 720p, $0.15/second at 1080p.
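For budgeting, the per-second rates above translate directly into a clip's cost. The helper below is a minimal sketch based only on the rates quoted on this page, not an official pricing calculator:

```python
# Rough cost estimate from the per-second rates listed above.
# Rates are taken from this page; confirm current pricing before relying on them.
RATE_PER_SECOND_USD = {"720p": 0.10, "1080p": 0.15}

def estimate_cost(duration_seconds: float, resolution: str = "1080p") -> float:
    """Estimated cost in USD for a clip of the given length and resolution."""
    return duration_seconds * RATE_PER_SECOND_USD[resolution]

# Example: a 15-second 1080p clip is 15 * $0.15 = $2.25.
print(f"${estimate_cost(15, '1080p'):.2f}")
```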

Output
---

The generated video is returned here once the request completes.

Related Models
---

Discover similar models you might be interested in

 [View all Video Models](https://modelslab.com/models?feature=videogen)

[![Seedance 1.0 Pro Image to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/c7e4b47a-4c1c-46cb-b662-c939273bf876.webp)](https://modelslab.com/models/byteplus/seedance-1.0-pro-i2v)[Bytedance](https://modelslab.com/models/byteplus)

 [Seedance 1.0 Pro Image to Video

Closed Source Model](https://modelslab.com/models/byteplus/seedance-1.0-pro-i2v)

[![Wan 2.5 Image to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/148eee62-7eb6-4803-be6c-64224377cd6b.webp)](https://modelslab.com/models/alibaba_cloud/wan2.5-i2v)[Alibaba](https://modelslab.com/models/alibaba_cloud)

 [Wan 2.5 Image to Video

Closed Source Model](https://modelslab.com/models/alibaba_cloud/wan2.5-i2v)

[![Gen4 Aleph (Video Edit)](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/8a41fc4d-9e2c-465f-b1e2-343e7ef681c7.webp)](https://modelslab.com/models/runway_ml/gen4_aleph)[Runway ML](https://modelslab.com/models/runway_ml)

 [Gen4 Aleph (Video Edit)

Closed Source Model](https://modelslab.com/models/runway_ml/gen4_aleph)

[![Wan 2.5 Text to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/01de6ee6-6738-4a78-a25c-b0a30e72e2e9.webp)](https://modelslab.com/models/alibaba_cloud/wan2.5-t2v)[Alibaba](https://modelslab.com/models/alibaba_cloud)

 [Wan 2.5 Text to Video

Closed Source Model](https://modelslab.com/models/alibaba_cloud/wan2.5-t2v)

[![Grok Imagine Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/66741ffe-f704-47ef-92d5-32fec90bcc7a.webp)](https://modelslab.com/models/xai/grok-imagine-video-t2v)[xAI](https://modelslab.com/models/xai)

 [Grok Imagine Text To Video

Closed Source Model](https://modelslab.com/models/xai/grok-imagine-video-t2v)

[![Kling V2.5 Turbo Text To Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/a66ac4fb-91df-4b23-ae0c-126c5f1f38a8.webp)](https://modelslab.com/models/klingai/kling-v2-5-turbo-t2v)[KlingAI](https://modelslab.com/models/klingai)

 [Kling V2.5 Turbo Text To Video

Closed Source Model](https://modelslab.com/models/klingai/kling-v2-5-turbo-t2v)

[![Hailuo 2.3 Fast Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/e7b31183-59f6-4e12-87dc-1b7eaa3f0be1.webp)](https://modelslab.com/models/minimax/Hailuo-2.3-Fast-i2v)[MiniMax](https://modelslab.com/models/minimax)

 [Hailuo 2.3 Fast Image To Video

Closed Source Model](https://modelslab.com/models/minimax/Hailuo-2.3-Fast-i2v)

[![Kling V2 Master Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/e6b80850-fe84-43cf-b5e6-7b13b0f77e08.webp)](https://modelslab.com/models/klingai/kling-v2-master-i2v)[KlingAI](https://modelslab.com/models/klingai)

 [Kling V2 Master Image To Video

Closed Source Model](https://modelslab.com/models/klingai/kling-v2-master-i2v)

[![Hailuo 02 Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/89f858fe-f734-4c69-a6e0-e4eff44a6417.webp)](https://modelslab.com/models/minimax/Hailuo-02-i2v)[MiniMax](https://modelslab.com/models/minimax)

 [Hailuo 02 Image To Video

Closed Source Model](https://modelslab.com/models/minimax/Hailuo-02-i2v)

[![Veo 2](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/72987a72-d15f-4584-b921-8c9d8dce9bd2.webp)](https://modelslab.com/models/google/veo2)[Google](https://modelslab.com/models/google)

 [Veo 2

Closed Source Model](https://modelslab.com/models/google/veo2)

[![LTX 2 PRO Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/f454ab2a-00f1-4e92-afbf-5b8b3775ac35.webp)](https://modelslab.com/models/ltx/ltx-2-pro-t2v)[ltx](https://modelslab.com/models/ltx)

 [LTX 2 PRO Text To Video

Closed Source Model](https://modelslab.com/models/ltx/ltx-2-pro-t2v)

[![Kling V1.6 Multi Image To Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/8c9627de-55bb-4272-8b4f-839fa8d53b9d.webp)](https://modelslab.com/models/klingai/kling-v1-6)[KlingAI](https://modelslab.com/models/klingai)

 [Kling V1.6 Multi Image To Video

Closed Source Model](https://modelslab.com/models/klingai/kling-v1-6)

[![LTX 2 PRO Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/c1f3b2ed-f802-4ce1-a425-5fbce6ffd5e3.webp)](https://modelslab.com/models/ltx/ltx-2-pro-i2v)[ltx](https://modelslab.com/models/ltx)

 [LTX 2 PRO Image To Video

Closed Source Model](https://modelslab.com/models/ltx/ltx-2-pro-i2v)

[![Hailuo 2.3 Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/9fe4b53a-0363-4c1c-94ab-b3153fa8b97b.webp)](https://modelslab.com/models/minimax/Hailuo-2.3-i2v)[MiniMax](https://modelslab.com/models/minimax)

 [Hailuo 2.3 Image To Video

Closed Source Model](https://modelslab.com/models/minimax/Hailuo-2.3-i2v)

[![wan2.1](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/04d08a15-bc50-43e7-96e5-5342c249cf50.webp)](https://modelslab.com/models/modelslab/wan2.1)[ModelsLab](https://modelslab.com/models/modelslab)

 [wan2.1

Open Source Model](https://modelslab.com/models/modelslab/wan2.1)

[![Sora 2 Pro Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/28616b8b-e430-4ce6-b693-768f995ec195.webp)](https://modelslab.com/models/openai/sora-2-pro-t2v)[OpenAI](https://modelslab.com/models/openai)

 [Sora 2 Pro Text To Video

Closed Source Model](https://modelslab.com/models/openai/sora-2-pro-t2v)

[![Seedance 1.0 Pro Fast Text to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/9a86a500-9dee-4489-858e-39fe9d9f3066.webp)](https://modelslab.com/models/byteplus/seedance-1.0-pro-fast-t2v)[Bytedance](https://modelslab.com/models/byteplus)

 [Seedance 1.0 Pro Fast Text to Video

Closed Source Model](https://modelslab.com/models/byteplus/seedance-1.0-pro-fast-t2v)

[![Hailuo 2.3 Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/567c802c-05e1-45f9-8278-9ee7c35388b6.webp)](https://modelslab.com/models/minimax/Hailuo-2.3-t2v)[MiniMax](https://modelslab.com/models/minimax)

 [Hailuo 2.3 Text To Video

Closed Source Model](https://modelslab.com/models/minimax/Hailuo-2.3-t2v)

Open Source Alternatives
---

Explore open-source models that offer similar capabilities with full transparency and flexibility

 [View all open source models](https://modelslab.com/models?feature=video&provider=open-source-models)

[![SVD](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/vRDgVJCkNyxWUSxclogvFeMWlgP9rV-metac3ZkLndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/svd)[ModelsLab](https://modelslab.com/models/modelslab)

 [SVD

Open Source Model](https://modelslab.com/models/modelslab/svd)

[![CogVideoX](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/VmhYqa98ohanHj8vL6mQjkr5TG2sSS-metaY29ndmlkZW94LndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/cogvideox)[ModelsLab](https://modelslab.com/models/modelslab)

 [CogVideoX

Open Source Model](https://modelslab.com/models/modelslab/cogvideox)

[![wan2.1](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/04d08a15-bc50-43e7-96e5-5342c249cf50.webp)](https://modelslab.com/models/modelslab/wan2.1)[ModelsLab](https://modelslab.com/models/modelslab)

 [wan2.1

Open Source Model](https://modelslab.com/models/modelslab/wan2.1)

About Wan2.6 Image To Video 
---

Generate cinematic 1080p, 24fps videos from a single image with multi-shot storytelling, native lip-sync, 15s length, and consumer-GPU support.

### Technical Specifications

| Specification | Value |
| --- | --- |
| Model ID | wan2.6-i2v |
| Provider | Alibaba Cloud |
| Category | Video Models |
| Task | Video Generation |
| Price | $0.50 per multiplier |
| Added | December 16, 2025 |

### Key Features

- AI video generation from text or image input
- Motion control and camera movement parameters
- Adjustable frame rate and video duration
- High-quality cinematic output up to 1080p
- Native audio generation support

### Quick Start

Integrate Wan2.6 Image To Video into your application with a single API call. Get your API key from the [pricing page](https://modelslab.com/pricing) to get started.

Python example (JavaScript, cURL, and PHP examples are available in the API documentation):

```python
import requests
import json

url = "https://modelslab.com/api/v7/video-fusion/image-to-video"

headers = {
    "Content-Type": "application/json"
}

data = {
        "model_id": "wan2.6-i2v",
        "prompt": "your prompt here",
        "key": "YOUR_API_KEY"
    }

try:
    response = requests.post(url, headers=headers, json=data)
    response.raise_for_status()  # Raises an HTTPError for bad responses (4XX or 5XX)
    result = response.json()
    print("API Response:")
    print(json.dumps(result, indent=2))
except requests.exceptions.HTTPError as http_err:
    print(f"HTTP error occurred: {http_err} - {response.text}")
except Exception as err:
    print(f"Other error occurred: {err}")</code>
```
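The playground also exposes image, audio, duration, and resolution inputs alongside the prompt. The snippet below is a hedged sketch of how those values might be passed in the same request body; only `model_id`, `prompt`, and `key` appear in the example above, so the field names `init_image`, `audio_url`, `duration`, and `resolution` are assumptions for illustration. Check the API documentation for the exact parameter names.

```python
import requests

# Hypothetical extension of the Quick Start request above. The extra fields
# (init_image, audio_url, duration, resolution) are assumed names for the
# playground's Image URL, Audio URL, Duration, and Resolution inputs --
# verify the exact keys in the API documentation before using them.
payload = {
    "model_id": "wan2.6-i2v",
    "prompt": "your prompt here",
    "key": "YOUR_API_KEY",
    "init_image": "https://example.com/reference.jpg",  # assumed field name
    "audio_url": "https://example.com/voiceover.mp3",   # assumed field name
    "duration": 5,                                       # seconds; assumed field name
    "resolution": "1080p",                               # assumed field name
}

response = requests.post(
    "https://modelslab.com/api/v7/video-fusion/image-to-video",
    headers={"Content-Type": "application/json"},
    json=payload,
    timeout=120,
)
print(response.json())
```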

View the [full API documentation](https://modelslab.com/models/alibaba_cloud/wan2.6-i2v/api) for SDKs, code examples in Python, JavaScript, and more.

### Pricing

Wan2.6 Image To Video API costs $0.50 per multiplier. Pay only for what you use with no minimum commitments. [View pricing plans](https://modelslab.com/pricing)

### Use Cases

- Marketing and promotional video creation
- Social media short-form video content
- Product demos and explainer videos
- Creative storytelling and animation

[Learn more about Wan2.6 Image To Video ](https://modelslab.com/wan26-image-to-video) [Browse Video Models](https://modelslab.com/models?feature=videogen) [More from Alibaba Cloud](https://modelslab.com/models/alibaba_cloud) [View Pricing](https://modelslab.com/pricing)

Wan2.6 Image To Video FAQ
---

### What is Wan2.6 Image To Video?

Wan2.6 Image To Video is an Alibaba Cloud model that generates cinematic 1080p, 24fps videos from a single image, with multi-shot storytelling, native lip-sync, 15-second clip length, and consumer-GPU support.

### How do I use the Wan2.6 Image To Video API?

You can integrate Wan2.6 Image To Video into your application with a single API call. Sign up on ModelsLab to get your API key, then use the model ID "wan2.6-i2v" in your API requests. We provide SDKs for Python, JavaScript, and cURL examples in the API documentation.

### How much does Wan2.6 Image To Video cost?

Wan2.6 Image To Video costs $0.50 per multiplier. ModelsLab uses pay-per-use pricing with no minimum commitments. A free tier is available to get started.

### What is the Wan2.6 Image To Video model ID?

The model ID for Wan2.6 Image To Video is "wan2.6-i2v". Use this ID in your API requests to specify this model.

### Does Wan2.6 Image To Video have a free tier?

Yes, ModelsLab offers a free tier that lets you try Wan2.6 Image To Video and other AI models. Sign up to get free API credits and start building immediately.

---

*This markdown version is optimized for AI agents and LLMs.*

**Links:**
- [Website](https://modelslab.com)
- [API Documentation](https://docs.modelslab.com)
- [Blog](https://modelslab.com/blog)

---
*Generated by ModelsLab - 2026-04-17*