---
title: Wan 2.5 Text To Video API | Text to Video | ModelsLab
description: Create cinematic 10-second videos in up to 1080p with smooth motion, built-in audio, advanced prompt adherence, and professional-grade camera movement.
url: https://modelslab.com/models/alibaba_cloud/wan2.5-t2v.md
canonical: https://modelslab.com/models/alibaba_cloud/wan2.5-t2v.md
type: product
component: Playground/Endpoint/Index
generated_at: 2026-04-15T22:01:21.647498Z
---

[![Wan 2.5 Text to Video thumbnail](https://assets.topadvisor.com/media/_solution_logo_09222023_8164684.png)](https://modelslab.com/models/alibaba_cloud)Wan 2.5 Text To Video
---

[by Alibaba Cloud](https://modelslab.com/models/alibaba_cloud)A next-generation video model that creates cinematic-quality clips up to 10 seconds long in 1080p, with synchronized audio and realistic motion.

`wan2.5-t2v`

Closed Source Model [LLMs.txt](https://modelslab.com/models/alibaba_cloud/wan2.5-t2v/llms.txt) [Learn more](https://modelslab.com/wan-25-text-to-video)

[API Playground](/models/alibaba_cloud/wan2.5-t2v) [API Documentation](/models/alibaba_cloud/wan2.5-t2v/api)

Input
---

Prompt 

Shot from a low angle, in a medium close-up, with warm tones, mixed lighting (the practical light from the desk lamp blends with the overcast light from the window), side lighting, and a central composition. In a classic detective office, wooden bookshelves are filled with old case files and ashtrays. A green desk lamp illuminates a case file spread out in the center of the desk. A fox, wearing a dark brown trench coat and a light gray fedora, sits in a leather chair, its fur crimson, its tail resting lightly on the edge, its fingers slowly turning yellowed pages. Outside, a steady drizzle falls beneath an overcast sky, tracing meandering streaks down the glass. It slowly raises its head, its ears twitching slightly, its amber eyes gazing directly at the camera, its mouth clearly moving as it speaks in a smooth, cynical voice: 'The case was cold, colder than a fish in winter. But every chicken has its secrets, and I, for one, intended to find them.'

Audio URL (optional) — upload an audio file or record one directly.

Resolution 

Duration 

Advanced Settings — customize your input with more control.

Cost: $0.05/second at 480p, $0.10/second at 720p, and $0.15/second at 1080p.
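Those per-second rates make per-clip costs easy to estimate. A minimal sketch — the rate table simply restates the prices above, and `estimate_cost` is a hypothetical helper, not part of the API:

```python
# Per-second rates quoted above for Wan 2.5 text-to-video output.
RATE_PER_SECOND = {"480p": 0.05, "720p": 0.10, "1080p": 0.15}

def estimate_cost(resolution: str, duration_seconds: float) -> float:
    """Estimate the USD charge for one generated clip."""
    if resolution not in RATE_PER_SECOND:
        raise ValueError(f"unsupported resolution: {resolution}")
    return round(RATE_PER_SECOND[resolution] * duration_seconds, 2)

# A maximum-length 10-second clip at each resolution:
for res in ("480p", "720p", "1080p"):
    print(res, estimate_cost(res, 10))
```

A full 10-second clip therefore ranges from $0.50 at 480p to $1.50 at 1080p.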


Related Models
---

Discover similar models you might be interested in

 [View all Video Models](https://modelslab.com/models?feature=videogen)

[![LTX 2 PRO Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/f454ab2a-00f1-4e92-afbf-5b8b3775ac35.webp)](https://modelslab.com/models/ltx/ltx-2-pro-t2v)[ltx](https://modelslab.com/models/ltx)

 [LTX 2 PRO Text To Video

Closed Source Model](https://modelslab.com/models/ltx/ltx-2-pro-t2v)

[![wan2.6 Image To Video (Flash)](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/118afa95-ea11-4749-8b78-d513b799afcd.webp)](https://modelslab.com/models/alibaba_cloud/wan2.6-i2v-flash)[Alibaba](https://modelslab.com/models/alibaba_cloud)

 [wan2.6 Image To Video (Flash)

Closed Source Model](https://modelslab.com/models/alibaba_cloud/wan2.6-i2v-flash)

[![Seedance Text To video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/ccecf39d-cff5-4ebb-94b6-41f3685dfb9f.webp)](https://modelslab.com/models/byteplus/seedance-t2v)[Bytedance](https://modelslab.com/models/byteplus)

 [Seedance Text To video

Closed Source Model](https://modelslab.com/models/byteplus/seedance-t2v)

[![Omnihuman](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/9bf40816-3e9c-40a7-9f2d-e238b1fc43d3.webp)](https://modelslab.com/models/byteplus/omni-human)[Bytedance](https://modelslab.com/models/byteplus)

 [Omnihuman

Closed Source Model](https://modelslab.com/models/byteplus/omni-human)

[![Kling V2.5 Turbo Image To Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/5bff3d58-58d2-494a-98df-1dffc0dc58b3.webp)](https://modelslab.com/models/klingai/Kling-V2-5-Turbo-i2v)[KlingAI](https://modelslab.com/models/klingai)

 [Kling V2.5 Turbo Image To Video

Closed Source Model](https://modelslab.com/models/klingai/Kling-V2-5-Turbo-i2v)

[![SVD](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/vRDgVJCkNyxWUSxclogvFeMWlgP9rV-metac3ZkLndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/svd)[ModelsLab](https://modelslab.com/models/modelslab)

 [SVD

Open Source Model](https://modelslab.com/models/modelslab/svd)

[![Kling V2.1  Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/17b21170-bd49-4758-b9c0-9475512d71d6.webp)](https://modelslab.com/models/klingai/kling-v2-1-i2v)[KlingAI](https://modelslab.com/models/klingai)

 [Kling V2.1 Image To Video

Closed Source Model](https://modelslab.com/models/klingai/kling-v2-1-i2v)

[![Seedance 1.0 Pro Fast Text to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/9a86a500-9dee-4489-858e-39fe9d9f3066.webp)](https://modelslab.com/models/byteplus/seedance-1.0-pro-fast-t2v)[Bytedance](https://modelslab.com/models/byteplus)

 [Seedance 1.0 Pro Fast Text to Video

Closed Source Model](https://modelslab.com/models/byteplus/seedance-1.0-pro-fast-t2v)

[![Grok Imagine Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/f132b8f2-c565-4aa5-a09e-e022044b3106.png)](https://modelslab.com/models/xai/grok-imagine-video-i2v)[xAI](https://modelslab.com/models/xai)

 [Grok Imagine Image To Video

Closed Source Model](https://modelslab.com/models/xai/grok-imagine-video-i2v)

[![Seedance Image to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/5c66f504-1587-4765-b3ca-51813239ce46.webp)](https://modelslab.com/models/byteplus/seedance-i2v)[Bytedance](https://modelslab.com/models/byteplus)

 [Seedance Image to Video

Closed Source Model](https://modelslab.com/models/byteplus/seedance-i2v)

[![Hailuo 02 Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/89f858fe-f734-4c69-a6e0-e4eff44a6417.webp)](https://modelslab.com/models/minimax/Hailuo-02-i2v)[Minimax](https://modelslab.com/models/minimax)

 [Hailuo 02 Image To Video

Closed Source Model](https://modelslab.com/models/minimax/Hailuo-02-i2v)

[![Kling V2.5 Turbo Text To Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/a66ac4fb-91df-4b23-ae0c-126c5f1f38a8.webp)](https://modelslab.com/models/klingai/kling-v2-5-turbo-t2v)[KlingAI](https://modelslab.com/models/klingai)

 [Kling V2.5 Turbo Text To Video

Closed Source Model](https://modelslab.com/models/klingai/kling-v2-5-turbo-t2v)

[![Hailuo 02 Start/End Frame Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/2c685f03-1f3c-412b-b830-3d3113512d43.webp)](https://modelslab.com/models/minimax/Hailuo-02-start-end-frame%20)[Minimax](https://modelslab.com/models/minimax)

 [Hailuo 02 Start/End Frame Image To Video

Closed Source Model](https://modelslab.com/models/minimax/Hailuo-02-start-end-frame%20)

[![Seedance 1.0 Pro Image to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/c7e4b47a-4c1c-46cb-b662-c939273bf876.webp)](https://modelslab.com/models/byteplus/seedance-1.0-pro-i2v)[Bytedance](https://modelslab.com/models/byteplus)

 [Seedance 1.0 Pro Image to Video

Closed Source Model](https://modelslab.com/models/byteplus/seedance-1.0-pro-i2v)

[![Wan 2.5 Image to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/148eee62-7eb6-4803-be6c-64224377cd6b.webp)](https://modelslab.com/models/alibaba_cloud/wan2.5-i2v)[Alibaba](https://modelslab.com/models/alibaba_cloud)

 [Wan 2.5 Image to Video

Closed Source Model](https://modelslab.com/models/alibaba_cloud/wan2.5-i2v)

[![CogVideoX](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/VmhYqa98ohanHj8vL6mQjkr5TG2sSS-metaY29ndmlkZW94LndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/cogvideox)[ModelsLab](https://modelslab.com/models/modelslab)

 [CogVideoX

Open Source Model](https://modelslab.com/models/modelslab/cogvideox)

[![Hailuo 2.3 Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/567c802c-05e1-45f9-8278-9ee7c35388b6.webp)](https://modelslab.com/models/minimax/Hailuo-2.3-t2v)[Minimax](https://modelslab.com/models/minimax)

 [Hailuo 2.3 Text To Video

Closed Source Model](https://modelslab.com/models/minimax/Hailuo-2.3-t2v)

[![Kling V2.1 Master Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/4c066928-ae15-4d08-99f0-1cdcab52742c.webp)](https://modelslab.com/models/klingai/kling-v2-1-master-t2v)[KlingAI](https://modelslab.com/models/klingai)

 [Kling V2.1 Master Text To Video

Closed Source Model](https://modelslab.com/models/klingai/kling-v2-1-master-t2v)

Open Source Alternatives
---

Explore open-source models that offer similar capabilities with full transparency and flexibility

 [View all open source models](https://modelslab.com/models?feature=video&provider=open-source-models)

[![SVD](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/vRDgVJCkNyxWUSxclogvFeMWlgP9rV-metac3ZkLndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/svd)[ModelsLab](https://modelslab.com/models/modelslab)

 [SVD

Open Source Model](https://modelslab.com/models/modelslab/svd)

[![CogVideoX](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/VmhYqa98ohanHj8vL6mQjkr5TG2sSS-metaY29ndmlkZW94LndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/cogvideox)[ModelsLab](https://modelslab.com/models/modelslab)

 [CogVideoX

Open Source Model](https://modelslab.com/models/modelslab/cogvideox)

[![wan2.1](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/04d08a15-bc50-43e7-96e5-5342c249cf50.webp)](https://modelslab.com/models/modelslab/wan2.1)[ModelsLab](https://modelslab.com/models/modelslab)

 [wan2.1

Open Source Model](https://modelslab.com/models/modelslab/wan2.1)

About Wan 2.5 Text To Video
---

A next-generation video model that creates cinematic-quality clips up to 10 seconds long in 1080p, with synchronized audio and realistic motion.

### Technical Specifications

- Model ID: wan2.5-t2v
- Provider: Alibaba Cloud
- Category: Video Models
- Task: Video Generation
- Price: $0.50 per multiplier
- Added: September 25, 2025

### Key Features

- AI video generation from text or image input
- Motion control and camera movement parameters
- Adjustable frame rate and video duration
- High-quality cinematic output up to 1080p
- Native audio generation support
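As a rough sketch of how these options could map onto a request body — the `resolution` and `duration` field names below mirror the playground inputs and are assumptions, not confirmed API parameters; check the API documentation for the authoritative list:

```python
# Hypothetical request payload exercising the features above.
# ASSUMPTION: "resolution" and "duration" are illustrative field names
# taken from the playground form, not verified API parameters.
payload = {
    "key": "YOUR_API_KEY",
    "model_id": "wan2.5-t2v",
    "prompt": "a fox detective reads a case file in a rain-streaked office",
    "resolution": "1080p",  # 480p, 720p, or 1080p
    "duration": 10,         # seconds; clips run up to 10 seconds
}
print(payload["model_id"], payload["resolution"])
```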

### Quick Start

Integrate Wan 2.5 Text To Video into your application with a single API call. Get your API key from the [pricing page](https://modelslab.com/pricing) to get started.

Python

```python
import requests
import json

url = "https://modelslab.com/api/v7/video-fusion/text-to-video"

headers = {
    "Content-Type": "application/json"
}

data = {
    "model_id": "wan2.5-t2v",
    "prompt": "your prompt here",
    "key": "YOUR_API_KEY"
}

try:
    response = requests.post(url, headers=headers, json=data)
    response.raise_for_status()  # raises HTTPError for 4XX/5XX responses
    result = response.json()
    print("API Response:")
    print(json.dumps(result, indent=2))
except requests.exceptions.HTTPError as http_err:
    print(f"HTTP error occurred: {http_err} - {response.text}")
except Exception as err:
    print(f"Other error occurred: {err}")
```

View the [full API documentation](https://modelslab.com/models/alibaba_cloud/wan2.5-t2v/api) for SDKs, code examples in Python, JavaScript, and more.
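Video generation is typically asynchronous, so the endpoint may answer with a processing status and a URL to poll rather than a finished file. The helper below is a sketch of handling both cases; the `status`, `output`, and `fetch_result` keys follow the response shape common to other ModelsLab endpoints and are assumptions here, not guarantees:

```python
# Classify a generation response as finished, still processing, or failed.
# ASSUMPTION: "status", "output", and "fetch_result" keys mirror the usual
# ModelsLab response shape; verify against the API documentation.
def interpret_response(result: dict) -> tuple:
    status = result.get("status")
    if status == "success":
        # Finished: "output" holds the generated video URL(s).
        return ("done", result.get("output"))
    if status == "processing":
        # Not ready yet: poll the "fetch_result" URL until the clip exists.
        return ("poll", result.get("fetch_result"))
    # Anything else: surface the error message.
    return ("error", result.get("message"))

print(interpret_response({"status": "processing",
                          "fetch_result": "https://example.com/fetch/123"}))
```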

### Pricing

Wan 2.5 Text To Video API costs $0.50 per multiplier. Pay only for what you use, with no minimum commitments. [View pricing plans](https://modelslab.com/pricing)

### Use Cases

- Marketing and promotional video creation
- Social media short-form video content
- Product demos and explainer videos
- Creative storytelling and animation

[Learn more about Wan 2.5 Text To Video](https://modelslab.com/wan-25-text-to-video) [Browse Video Models](https://modelslab.com/models?feature=videogen) [More from Alibaba Cloud](https://modelslab.com/models/alibaba_cloud) [View Pricing](https://modelslab.com/pricing)

Wan 2.5 Text To Video FAQ
---

### What is Wan 2.5 Text To Video?

Wan 2.5 Text To Video is a next-generation video model from Alibaba Cloud that creates cinematic-quality clips up to 10 seconds long in 1080p, with synchronized audio and realistic motion.

### How do I use the Wan 2.5 Text To Video API?

You can integrate Wan 2.5 Text To Video into your application with a single API call. Sign up on ModelsLab to get your API key, then use the model ID "wan2.5-t2v" in your API requests. We provide SDKs for Python, JavaScript, and cURL examples in the API documentation.

### How much does Wan 2.5 Text To Video cost?

Wan 2.5 Text To Video costs $0.50 per multiplier. ModelsLab uses pay-per-use pricing with no minimum commitments. A free tier is available to get started.

### What is the Wan 2.5 Text To Video model ID?

The model ID for Wan 2.5 Text To Video is "wan2.5-t2v". Use this ID in your API requests to specify this model.

### Does Wan 2.5 Text To Video have a free tier?

Yes, ModelsLab offers a free tier that lets you try Wan 2.5 Text To Video and other AI models. Sign up to get free API credits and start building immediately.

---

*This markdown version is optimized for AI agents and LLMs.*

**Links:**
- [Website](https://modelslab.com)
- [API Documentation](https://docs.modelslab.com)
- [Blog](https://modelslab.com/blog)

---
*Generated by ModelsLab - 2026-04-16*