---
title: Wan2.6 Text To Video API | Text to Video | ModelsLab
description: Generate cinematic 1080p videos at 24fps up to 15s from text, with native lip-sync audio, multi-shot storytelling, and reference support for ads and social media.
url: https://modelslab.com/models/alibaba_cloud/wan2.6-t2v.md
canonical: https://modelslab.com/models/alibaba_cloud/wan2.6-t2v.md
type: product
component: Playground/Endpoint/Index
generated_at: 2026-04-17T08:33:35.548613Z
---

[![Wan2.6 Text To Video  thumbnail](https://assets.topadvisor.com/media/_solution_logo_09222023_8164684.png)](https://modelslab.com/models/alibaba_cloud)Wan2.6 Text To Video 
---

[by Alibaba Cloud](https://modelslab.com/models/alibaba_cloud)Generate cinematic 1080p videos at 24fps up to 15s from text, with native lip-sync audio, multi-shot storytelling, and reference support for ads and social media.

`wan2.6-t2v`

Closed Source Model [LLMs.txt](https://modelslab.com/models/alibaba_cloud/wan2.6-t2v/llms.txt) [Learn more](https://modelslab.com/wan26-text-to-video)

[API Playground](/models/alibaba_cloud/wan2.6-t2v) [API Documentation](/models/alibaba_cloud/wan2.6-t2v/api)

Input
---

Audio URL — upload an audio file or record audio

Prompt 

A man talking toward the camera from the Great Wall of China, saying: "Welcome to my vlog! The views from this place are breathtaking and amazing; you should come here too."

Advanced Settings: customize your input with more control.


Cost for 720P: **$0.10/second**, 1080P: **$0.15/second**
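With the per-second rates above, the price of a clip is simply duration times rate. A minimal sketch of the arithmetic (the `RATES` table and `clip_cost` helper are illustrative, not part of the API):

```python
# Per-second billing rates from the line above (USD).
RATES = {"720p": 0.10, "1080p": 0.15}

def clip_cost(resolution: str, seconds: float) -> float:
    """Estimated cost in USD for a clip at the given resolution and length."""
    return round(RATES[resolution] * seconds, 2)

# A maximum-length 15 s clip:
print(clip_cost("720p", 15))   # 1.5
print(clip_cost("1080p", 15))  # 2.25
```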


Related Models
---

Discover similar models you might be interested in

 [View all Video Models](https://modelslab.com/models?feature=videogen)

[![Grok Imagine Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/f132b8f2-c565-4aa5-a09e-e022044b3106.png)](https://modelslab.com/models/xai/grok-imagine-video-i2v)[xAI](https://modelslab.com/models/xai)

 [Grok Imagine Image To Video

Closed Source Model](https://modelslab.com/models/xai/grok-imagine-video-i2v)

[![Veo 3 Fast preview](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/707c2544-a4e2-4f55-8db9-fd20fb82063b.webp)](https://modelslab.com/models/google/veo-3.0-fast-generate-preview)[Google](https://modelslab.com/models/google)

 [Veo 3 Fast preview

Closed Source Model](https://modelslab.com/models/google/veo-3.0-fast-generate-preview)

[![Wan 2.5 Image to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/148eee62-7eb6-4803-be6c-64224377cd6b.webp)](https://modelslab.com/models/alibaba_cloud/wan2.5-i2v)[Alibaba](https://modelslab.com/models/alibaba_cloud)

 [Wan 2.5 Image to Video

Closed Source Model](https://modelslab.com/models/alibaba_cloud/wan2.5-i2v)

[![Kling V1.6 Multi Image To Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/8c9627de-55bb-4272-8b4f-839fa8d53b9d.webp)](https://modelslab.com/models/klingai/kling-v1-6)[KlingAI](https://modelslab.com/models/klingai)

 [Kling V1.6 Multi Image To Video

Closed Source Model](https://modelslab.com/models/klingai/kling-v1-6)

[![Hailuo 02 Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/f45a761f-b956-4b06-9ffc-0771f87ba481.webp)](https://modelslab.com/models/minimax/Hailuo-02-t2v)[MiniMax](https://modelslab.com/models/minimax)

 [Hailuo 02 Text To Video

Closed Source Model](https://modelslab.com/models/minimax/Hailuo-02-t2v)

[![Seedance 1.0 Pro Fast Text to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/9a86a500-9dee-4489-858e-39fe9d9f3066.webp)](https://modelslab.com/models/byteplus/seedance-1.0-pro-fast-t2v)[Bytedance](https://modelslab.com/models/byteplus)

 [Seedance 1.0 Pro Fast Text to Video

Closed Source Model](https://modelslab.com/models/byteplus/seedance-1.0-pro-fast-t2v)

[![Kling V2.5 Turbo Text To Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/a66ac4fb-91df-4b23-ae0c-126c5f1f38a8.webp)](https://modelslab.com/models/klingai/kling-v2-5-turbo-t2v)[KlingAI](https://modelslab.com/models/klingai)

 [Kling V2.5 Turbo Text To Video

Closed Source Model](https://modelslab.com/models/klingai/kling-v2-5-turbo-t2v)

[![Seedance Text To video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/ccecf39d-cff5-4ebb-94b6-41f3685dfb9f.webp)](https://modelslab.com/models/byteplus/seedance-t2v)[Bytedance](https://modelslab.com/models/byteplus)

 [Seedance Text To video

Closed Source Model](https://modelslab.com/models/byteplus/seedance-t2v)

[![Veo 3 Fast](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/be91331a-e169-469a-af97-7552fc1510ee.webp)](https://modelslab.com/models/google/veo-3.0-fast-generate)[Google](https://modelslab.com/models/google)

 [Veo 3 Fast

Closed Source Model](https://modelslab.com/models/google/veo-3.0-fast-generate)

[![lipsync-2](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/3bd64ea0-8ec5-4cfb-b0ca-ac3bbabafd7b.webp)](https://modelslab.com/models/sync/lipsync-2)[Sync.so](https://modelslab.com/models/sync)

 [lipsync-2

Closed Source Model](https://modelslab.com/models/sync/lipsync-2)

[![Veo 3.1](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/044f763e-8b9a-49a9-9c7c-317de8b13c9a.webp)](https://modelslab.com/models/google/veo-3.1)[Google](https://modelslab.com/models/google)

 [Veo 3.1

Closed Source Model](https://modelslab.com/models/google/veo-3.1)

[![kling V2.1 Master Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/4c066928-ae15-4d08-99f0-1cdcab52742c.webp)](https://modelslab.com/models/klingai/kling-v2-1-master-t2v)[KlingAI](https://modelslab.com/models/klingai)

 [kling V2.1 Master Text To Video

Closed Source Model](https://modelslab.com/models/klingai/kling-v2-1-master-t2v)

[![Veo 3](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/aa80e1de-af63-4efb-ad84-004abde2d663.webp)](https://modelslab.com/models/google/veo3)[Google](https://modelslab.com/models/google)

 [Veo 3

Closed Source Model](https://modelslab.com/models/google/veo3)

[![Seedance Image to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/5c66f504-1587-4765-b3ca-51813239ce46.webp)](https://modelslab.com/models/byteplus/seedance-i2v)[Bytedance](https://modelslab.com/models/byteplus)

 [Seedance Image to Video

Closed Source Model](https://modelslab.com/models/byteplus/seedance-i2v)

[![Veo 2](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/72987a72-d15f-4584-b921-8c9d8dce9bd2.webp)](https://modelslab.com/models/google/veo2)[Google](https://modelslab.com/models/google)

 [Veo 2

Closed Source Model](https://modelslab.com/models/google/veo2)

[![Kling V2 Master Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/e2bbd2c7-98bb-422a-ab66-09a2200c1046.webp)](https://modelslab.com/models/klingai/kling-v2-master-t2v)[KlingAI](https://modelslab.com/models/klingai)

 [Kling V2 Master Text To Video

Closed Source Model](https://modelslab.com/models/klingai/kling-v2-master-t2v)

[![SVD](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/vRDgVJCkNyxWUSxclogvFeMWlgP9rV-metac3ZkLndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/svd)[ModelsLab](https://modelslab.com/models/modelslab)

 [SVD

Open Source Model](https://modelslab.com/models/modelslab/svd)

[![Omnihuman](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/9bf40816-3e9c-40a7-9f2d-e238b1fc43d3.webp)](https://modelslab.com/models/byteplus/omni-human)[Bytedance](https://modelslab.com/models/byteplus)

 [Omnihuman

Closed Source Model](https://modelslab.com/models/byteplus/omni-human)

Open Source Alternatives
---

Explore open-source models that offer similar capabilities with full transparency and flexibility

 [View all open source models](https://modelslab.com/models?feature=video&provider=open-source-models)

[![SVD](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/vRDgVJCkNyxWUSxclogvFeMWlgP9rV-metac3ZkLndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/svd)[ModelsLab](https://modelslab.com/models/modelslab)

 [SVD

Open Source Model](https://modelslab.com/models/modelslab/svd)

[![CogVideoX](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/VmhYqa98ohanHj8vL6mQjkr5TG2sSS-metaY29ndmlkZW94LndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/cogvideox)[ModelsLab](https://modelslab.com/models/modelslab)

 [CogVideoX

Open Source Model](https://modelslab.com/models/modelslab/cogvideox)

[![wan2.1](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/04d08a15-bc50-43e7-96e5-5342c249cf50.webp)](https://modelslab.com/models/modelslab/wan2.1)[ModelsLab](https://modelslab.com/models/modelslab)

 [wan2.1

Open Source Model](https://modelslab.com/models/modelslab/wan2.1)

About Wan2.6 Text To Video
---

Generate cinematic 1080p videos at 24fps up to 15s from text, with native lip-sync audio, multi-shot storytelling, and reference support for ads and social media.

### Technical Specifications

- Model ID: `wan2.6-t2v`
- Provider: Alibaba Cloud
- Category: Video Models
- Task: Video Generation
- Price: $0.50 per multiplier
- Added: December 16, 2025

### Key Features

- AI video generation from text or image input
- Motion control and camera movement parameters
- Adjustable frame rate and video duration
- High-quality cinematic output up to 1080p
- Native audio generation support
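The features above map onto request parameters. A hedged sketch of what a fuller request body could look like — only `model_id`, `prompt`, and `key` are confirmed by the Quick Start example on this page; every other field name is a guess, so consult the API documentation for the real parameter names:

```python
# Hypothetical request body. Only model_id, prompt, and key are confirmed
# by the Quick Start example; the remaining keys are illustrative placeholders
# for the features listed above, not documented parameter names.
payload = {
    "model_id": "wan2.6-t2v",
    "key": "YOUR_API_KEY",
    "prompt": "a slow dolly shot across a rain-soaked street at night",
    "resolution": "1080p",  # output quality up to 1080p (name is a guess)
    "fps": 24,              # adjustable frame rate (name is a guess)
    "duration": 10,         # seconds, up to 15 (name is a guess)
}
print(payload["model_id"])
```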

### Quick Start

Integrate Wan2.6 Text To Video into your application with a single API call. Get your API key from the [pricing page](https://modelslab.com/pricing) to get started.


```python
import requests
import json

url = "https://modelslab.com/api/v7/video-fusion/text-to-video"

headers = {
    "Content-Type": "application/json"
}

data = {
    "model_id": "wan2.6-t2v",
    "prompt": "your prompt here",
    "key": "YOUR_API_KEY"
}

try:
    response = requests.post(url, headers=headers, json=data)
    response.raise_for_status()  # Raises an HTTPError for bad responses (4XX or 5XX)
    result = response.json()
    print("API Response:")
    print(json.dumps(result, indent=2))
except requests.exceptions.HTTPError as http_err:
    print(f"HTTP error occurred: {http_err} - {response.text}")
except Exception as err:
    print(f"Other error occurred: {err}")
```

View the [full API documentation](https://modelslab.com/models/alibaba_cloud/wan2.6-t2v/api) for SDKs, code examples in Python, JavaScript, and more.
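Video generation usually completes asynchronously, so the first response may report a processing state rather than a finished file. A hedged polling sketch — the `status` and `fetch_result` fields assumed here are not documented on this page, so verify the exact response schema against the API documentation:

```python
import time
import requests

def wait_for_video(first_response: dict, api_key: str,
                   interval: float = 10.0, timeout: float = 600.0) -> dict:
    """Poll until the job leaves the 'processing' state.

    Assumes the response carries a `status` field and, while processing,
    a `fetch_result` URL to poll -- both are assumptions, not documented
    fields; check the real schema before relying on them.
    """
    result = first_response
    deadline = time.monotonic() + timeout
    while result.get("status") == "processing":
        if time.monotonic() > deadline:
            raise TimeoutError("video generation did not finish in time")
        time.sleep(interval)
        poll = requests.post(result["fetch_result"], json={"key": api_key})
        poll.raise_for_status()
        result = poll.json()
    return result
```

If the first response already reports a terminal state (for example success), the helper returns it unchanged without making any network calls.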

### Pricing

Wan2.6 Text To Video API costs $0.50 per multiplier. Pay only for what you use, with no minimum commitments. [View pricing plans](https://modelslab.com/pricing)

### Use Cases

- Marketing and promotional video creation
- Social media short-form video content
- Product demos and explainer videos
- Creative storytelling and animation

[Learn more about Wan2.6 Text To Video ](https://modelslab.com/wan26-text-to-video) [Browse Video Models](https://modelslab.com/models?feature=videogen) [More from Alibaba Cloud](https://modelslab.com/models/alibaba_cloud) [View Pricing](https://modelslab.com/pricing)

Wan2.6 Text To Video FAQ
---

### What is Wan2.6 Text To Video?

Wan2.6 Text To Video is Alibaba Cloud's text-to-video model. It generates cinematic 1080p videos at 24fps, up to 15s long, from a text prompt, with native lip-sync audio, multi-shot storytelling, and reference support for ads and social media.

### How do I use the Wan2.6 Text To Video API?

You can integrate Wan2.6 Text To Video into your application with a single API call. Sign up on ModelsLab to get your API key, then use the model ID "wan2.6-t2v" in your API requests. We provide SDKs for Python, JavaScript, and cURL examples in the API documentation.

### How much does Wan2.6 Text To Video cost?

Wan2.6 Text To Video costs $0.50 per multiplier. ModelsLab uses pay-per-use pricing with no minimum commitments. A free tier is available to get started.

### What is the Wan2.6 Text To Video model ID?

The model ID for Wan2.6 Text To Video is "wan2.6-t2v". Use this ID in your API requests to specify this model.

### Does Wan2.6 Text To Video have a free tier?

Yes, ModelsLab offers a free tier that lets you try Wan2.6 Text To Video and other AI models. Sign up to get free API credits and start building immediately.

---

*This markdown version is optimized for AI agents and LLMs.*

**Links:**
- [Website](https://modelslab.com)
- [API Documentation](https://docs.modelslab.com)
- [Blog](https://modelslab.com/blog)

---
*Generated by ModelsLab - 2026-04-17*