# Wan2.6 Text To Video

> Generate cinematic 1080p videos at 24fps, up to 15s, from text — with native lip-sync audio, multi-shot storytelling, and reference support for ads and social media.

## Overview

- **Model ID**: `wan2.6-t2v`
- **Category**: video
- **Provider**: alibaba_cloud
- **Status**: model_ready
- **Screenshot**: `https://assets.modelslab.com/generations/6cb96cc2-d10a-4af3-bb89-c97e8795c697.webp`

## API Information

This model can be used via our HTTP API. See the API documentation and usage examples below.

### Endpoint

- **URL**: `https://modelslab.com/api/v7/video-fusion/text-to-video`
- **Method**: POST

### Parameters

- **`init_audio`** (required): Audio the generated video will attempt to align with, including lip movements and rhythm. Format: WAV, MP3. If the audio is longer than the selected duration, only the first portion matching the duration is used and the rest is discarded. If the audio is shorter than the video, the part of the video beyond the audio length is silent.
  - Type: file
- **`prompt`** (required): A prompt describing the content and actions of the generated video.
  - Type: textarea
- **`model_id`** (optional): Model ID for selecting the model from multiple available models.
  - Type: text
- **`duration`** (optional): The duration of the generated video in seconds.
  - Type: select (options: 5, 10, 15)
- **`resolution`** (optional): The resolution of the generated video in pixels.
  - Type: select (options: 720p, 1080p)

## Usage Examples

### cURL

```bash
curl --request POST \
  --url https://modelslab.com/api/v7/video-fusion/text-to-video \
  --header "Content-Type: application/json" \
  --data '{
    "key": "YOUR_API_KEY",
    "model_id": "wan2.6-t2v",
    "init_audio": "https://assets.modelslab.com/generations/74c4f2e6-2fa6-4d8f-a0e3-09ff1a94d9e1.mp3",
    "prompt": "A man talking to the camera from the Great Wall of China, saying: Welcome to my vlogs! The beautiful views from this place are breathtaking and amazing, you should also come here.",
    "duration": "5",
    "resolution": "720p"
  }'
```

### Python

```python
import requests

response = requests.post(
    "https://modelslab.com/api/v7/video-fusion/text-to-video",
    headers={"Content-Type": "application/json"},
    json={
        "key": "YOUR_API_KEY",
        "model_id": "wan2.6-t2v",
        "init_audio": "https://assets.modelslab.com/generations/74c4f2e6-2fa6-4d8f-a0e3-09ff1a94d9e1.mp3",
        "prompt": "A man talking to the camera from the Great Wall of China, saying: Welcome to my vlogs! The beautiful views from this place are breathtaking and amazing, you should also come here.",
        "duration": "5",
        "resolution": "720p"
    }
)

print(response.json())
```

### JavaScript

```javascript
fetch("https://modelslab.com/api/v7/video-fusion/text-to-video", {
  method: "POST",
  headers: {
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    key: "YOUR_API_KEY",
    model_id: "wan2.6-t2v",
    init_audio: "https://assets.modelslab.com/generations/74c4f2e6-2fa6-4d8f-a0e3-09ff1a94d9e1.mp3",
    prompt: "A man talking to the camera from the Great Wall of China, saying: Welcome to my vlogs! The beautiful views from this place are breathtaking and amazing, you should also come here.",
    duration: "5",
    resolution: "720p"
  })
})
  .then(response => response.json())
  .then(data => console.log(data));
```

## Links

- [Model Playground](https://modelslab.com/models/wan-2.6-text-to-video/wan2.6-t2v)
- [API Documentation](https://docs.modelslab.com)
- [ModelsLab Platform](https://modelslab.com)
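Because `duration` and `resolution` accept only a fixed set of values, and the two audio fields are required, it can help to validate the request body client-side before sending it. The sketch below is illustrative only — `build_payload` is a hypothetical helper, not part of any official ModelsLab SDK; the allowed values come from the parameter table above.

```python
# Hypothetical client-side validation for the text-to-video endpoint.
# Allowed values are taken from the parameter table in this document.

ALLOWED_DURATIONS = {"5", "10", "15"}       # `duration` select options
ALLOWED_RESOLUTIONS = {"720p", "1080p"}     # `resolution` select options

def build_payload(key, prompt, init_audio, duration="5",
                  resolution="720p", model_id="wan2.6-t2v"):
    """Validate parameters and return the JSON body for the POST request."""
    if not prompt:
        raise ValueError("prompt is required")
    if not init_audio:
        raise ValueError("init_audio is required")
    if duration not in ALLOWED_DURATIONS:
        raise ValueError(f"duration must be one of {sorted(ALLOWED_DURATIONS)}")
    if resolution not in ALLOWED_RESOLUTIONS:
        raise ValueError(f"resolution must be one of {sorted(ALLOWED_RESOLUTIONS)}")
    return {
        "key": key,
        "model_id": model_id,
        "init_audio": init_audio,
        "prompt": prompt,
        "duration": duration,
        "resolution": resolution,
    }
```

The resulting dictionary can be passed directly as the `json=` argument to `requests.post`, as in the Python example above; catching the `ValueError` locally is cheaper than a round trip to the API for a rejected request.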