# OmniHuman-1.5

> OmniHuman-1.5 is a film-grade digital human model that turns a single image, an audio clip, and a text prompt into a lifelike video performance. It supports free-form prompt input, unrestricted camera and character movement, and intelligent audio understanding for natural, expressive, story-driven results.

## Overview

- **Model ID**: `omni-human-1.5`
- **Category**: video
- **Provider**: byteplus
- **Status**: model_ready
- **Screenshot**: `https://assets.modelslab.com/generations/22dd3233-37b6-43a4-a48a-1bb951d23e90.webp`

## API Information

This model is available through our HTTP API. See the endpoint details, parameters, and usage examples below.

### Endpoint

- **URL**: `https://modelslab.com/api/v7/video-fusion/image-to-video`
- **Method**: POST

### Parameters

- **`init_image`** (required): the reference image of the character
  - Type: file
- **`init_audio`** (required): the audio track the character performs to
  - Type: file
- **`prompt`** (required): a text description of the desired performance and camera movement
  - Type: textarea
- **`model_id`** (optional): set to `omni-human-1.5` to select this model
  - Type: text

Every request must also include your API key in the `key` field of the JSON body. In the examples below, the file parameters are supplied as publicly accessible URLs.

## Usage Examples

### cURL

```bash
curl --request POST \
  --url https://modelslab.com/api/v7/video-fusion/image-to-video \
  --header "Content-Type: application/json" \
  --data '{
    "key": "YOUR_API_KEY",
    "model_id": "omni-human-1.5",
    "init_image": "https://assets.modelslab.com/generations/8931fb55-905f-4ae8-8924-1b4e583ff789.png",
    "init_audio": "https://assets.modelslab.com/generations/7e1221ae-c5a9-4b1a-96cb-3448cc73c6e3.m4a",
    "prompt": "The camera zoomed in. The woman spoke to the camera, and after finishing, she quickly turned around and ran backward."
  }'
```

### Python

```python
import requests

response = requests.post(
    "https://modelslab.com/api/v7/video-fusion/image-to-video",
    headers={"Content-Type": "application/json"},
    json={
        "key": "YOUR_API_KEY",
        "model_id": "omni-human-1.5",
        "init_image": "https://assets.modelslab.com/generations/8931fb55-905f-4ae8-8924-1b4e583ff789.png",
        "init_audio": "https://assets.modelslab.com/generations/7e1221ae-c5a9-4b1a-96cb-3448cc73c6e3.m4a",
        "prompt": "The camera zoomed in. The woman spoke to the camera, and after finishing, she quickly turned around and ran backward."
    }
)

print(response.json())
```

### JavaScript

```javascript
fetch("https://modelslab.com/api/v7/video-fusion/image-to-video", {
  method: "POST",
  headers: {
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    "key": "YOUR_API_KEY",
    "model_id": "omni-human-1.5",
    "init_image": "https://assets.modelslab.com/generations/8931fb55-905f-4ae8-8924-1b4e583ff789.png",
    "init_audio": "https://assets.modelslab.com/generations/7e1221ae-c5a9-4b1a-96cb-3448cc73c6e3.m4a",
    "prompt": "The camera zoomed in. The woman spoke to the camera, and after finishing, she quickly turned around and ran backward."
  })
})
  .then(response => response.json())
  .then(data => console.log(data));
```

## Links

- [Model Playground](https://modelslab.com/models/omnihuman-1.5/omni-human-1.5)
- [API Documentation](https://docs.modelslab.com)
- [ModelsLab Platform](https://modelslab.com)
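
## Example: Reusable Request Helper

For repeated calls it can be convenient to wrap the request above in a small function. The sketch below is a minimal Python helper built only from the endpoint and parameters documented in this README; the function name `generate_video`, the timeout value, and the error handling are illustrative assumptions, not part of the official API, and no particular response schema is assumed.

```python
import requests

API_URL = "https://modelslab.com/api/v7/video-fusion/image-to-video"


def generate_video(api_key: str, init_image: str, init_audio: str,
                   prompt: str, model_id: str = "omni-human-1.5") -> dict:
    """Submit a generation request to the documented endpoint.

    `init_image` and `init_audio` are publicly accessible URLs,
    matching the usage examples above. Returns the parsed JSON
    response; this sketch makes no assumptions about its schema.
    """
    payload = {
        "key": api_key,
        "model_id": model_id,
        "init_image": init_image,
        "init_audio": init_audio,
        "prompt": prompt,
    }
    # Timeout and error handling are illustrative choices, not API requirements.
    response = requests.post(API_URL, json=payload, timeout=120)
    response.raise_for_status()  # surface HTTP-level errors early
    return response.json()


if __name__ == "__main__":
    result = generate_video(
        api_key="YOUR_API_KEY",
        init_image="https://assets.modelslab.com/generations/8931fb55-905f-4ae8-8924-1b4e583ff789.png",
        init_audio="https://assets.modelslab.com/generations/7e1221ae-c5a9-4b1a-96cb-3448cc73c6e3.m4a",
        prompt="The camera zoomed in. The woman spoke to the camera, "
               "and after finishing, she quickly turned around and ran backward.",
    )
    print(result)
```

`raise_for_status()` only surfaces HTTP-level failures; consult the API documentation linked above for the actual response fields and any job-status polling the service may require.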