---
title: Omnihuman 1.5 API | Image to Video | ModelsLab
description: Generate lifelike, expressive video avatars from a single image and audio, featuring full-body motion, deep semantic sync, and cinematic camera.
url: https://modelslab.com/models/byteplus/omni-human-1.5.md
canonical: https://modelslab.com/models/byteplus/omni-human-1.5.md
type: product
component: Playground/Endpoint/Index
generated_at: 2026-04-15T06:50:14.694307Z
---

[![Omnihuman-1.5 thumbnail](https://assets.modelslab.ai/api-logos/01KMSWQWTME16RN3X66SRK2CAE.png)](https://modelslab.com/models/byteplus)

Omnihuman-1.5
---

[by Byteplus](https://modelslab.com/models/byteplus)

OmniHuman 1.5 is a film-grade digital human model that turns a single image, an audio track, and a text prompt into lifelike video performances. It supports full prompt input, unrestricted camera and character movement, and intelligent audio understanding for natural, expressive, story-driven results.

`omni-human-1.5`

Closed Source Model [LLMs.txt](https://modelslab.com/models/byteplus/omni-human-1.5/llms.txt) [Learn more](https://modelslab.com/omnihuman-15)

[API Playground](/models/byteplus/omni-human-1.5) [API Documentation](/models/byteplus/omni-human-1.5/api)

Input
---

OmniHuman 1.5 takes three inputs:

- **Reference image** — a single image of the subject
- **Reference audio** — an uploaded audio file or a recorded clip
- **Prompt** — a text description of the performance, e.g. "The camera zoomed in. The woman spoke to the camera, and after finishing, she quickly turned around and ran backward."


Video generation costs **$0.14 per second**.


Related Models
---

Discover similar models you might be interested in

 [View all Video Models](https://modelslab.com/models?feature=videogen)

[![Gen4 Aleph (Video Edit)](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/8a41fc4d-9e2c-465f-b1e2-343e7ef681c7.webp)](https://modelslab.com/models/runway_ml/gen4_aleph)[Runway ML](https://modelslab.com/models/runway_ml)

 [Gen4 Aleph (Video Edit)

Closed Source Model](https://modelslab.com/models/runway_ml/gen4_aleph)

[![wan2.6 Image To Video (Flash)](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/118afa95-ea11-4749-8b78-d513b799afcd.webp)](https://modelslab.com/models/alibaba_cloud/wan2.6-i2v-flash)[Alibaba](https://modelslab.com/models/alibaba_cloud)

 [wan2.6 Image To Video (Flash)

Closed Source Model](https://modelslab.com/models/alibaba_cloud/wan2.6-i2v-flash)

[![Seedance Image to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/5c66f504-1587-4765-b3ca-51813239ce46.webp)](https://modelslab.com/models/byteplus/seedance-i2v)[Bytedance](https://modelslab.com/models/byteplus)

 [Seedance Image to Video

Closed Source Model](https://modelslab.com/models/byteplus/seedance-i2v)

[![Veo 3.1](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/044f763e-8b9a-49a9-9c7c-317de8b13c9a.webp)](https://modelslab.com/models/google/veo-3.1)[Google](https://modelslab.com/models/google)

 [Veo 3.1

Closed Source Model](https://modelslab.com/models/google/veo-3.1)

[![Wan2.6 Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/9cc6f6ce-e9f2-4908-8009-0f1982af5ff5.png)](https://modelslab.com/models/alibaba_cloud/wan2.6-i2v)[Alibaba](https://modelslab.com/models/alibaba_cloud)

 [Wan2.6 Image To Video

Closed Source Model](https://modelslab.com/models/alibaba_cloud/wan2.6-i2v)

[![wan2.1](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/04d08a15-bc50-43e7-96e5-5342c249cf50.webp)](https://modelslab.com/models/modelslab/wan2.1)[ModelsLab](https://modelslab.com/models/modelslab)

 [wan2.1

Open Source Model](https://modelslab.com/models/modelslab/wan2.1)

[![Grok Imagine Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/66741ffe-f704-47ef-92d5-32fec90bcc7a.webp)](https://modelslab.com/models/xai/grok-imagine-video-t2v)[xAI](https://modelslab.com/models/xai)

 [Grok Imagine Text To Video

Closed Source Model](https://modelslab.com/models/xai/grok-imagine-video-t2v)

[![Veo 3 Fast preview](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/707c2544-a4e2-4f55-8db9-fd20fb82063b.webp)](https://modelslab.com/models/google/veo-3.0-fast-generate-preview)[Google](https://modelslab.com/models/google)

 [Veo 3 Fast preview

Closed Source Model](https://modelslab.com/models/google/veo-3.0-fast-generate-preview)

[![Hailuo 2.3 Fast Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/e7b31183-59f6-4e12-87dc-1b7eaa3f0be1.webp)](https://modelslab.com/models/minimax/Hailuo-2.3-Fast-i2v)[MiniMax](https://modelslab.com/models/minimax)

 [Hailuo 2.3 Fast Image To Video

Closed Source Model](https://modelslab.com/models/minimax/Hailuo-2.3-Fast-i2v)

[![CogVideoX](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/VmhYqa98ohanHj8vL6mQjkr5TG2sSS-metaY29ndmlkZW94LndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/cogvideox)[ModelsLab](https://modelslab.com/models/modelslab)

 [CogVideoX

Open Source Model](https://modelslab.com/models/modelslab/cogvideox)

[![Sora-2](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/1c99ac56-551e-4ff7-97d2-66a6d58b04f9.webp)](https://modelslab.com/models/openai/sora-2)[OpenAI](https://modelslab.com/models/openai)

 [Sora-2

Closed Source Model](https://modelslab.com/models/openai/sora-2)

[![kling V2.1 Master Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/4c066928-ae15-4d08-99f0-1cdcab52742c.webp)](https://modelslab.com/models/klingai/kling-v2-1-master-t2v)[KlingAI](https://modelslab.com/models/klingai)

 [kling V2.1 Master Text To Video

Closed Source Model](https://modelslab.com/models/klingai/kling-v2-1-master-t2v)

[![Hailuo 02 Start/End Frame Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/2c685f03-1f3c-412b-b830-3d3113512d43.webp)](https://modelslab.com/models/minimax/Hailuo-02-start-end-frame%20)[MiniMax](https://modelslab.com/models/minimax)

 [Hailuo 02 Start/End Frame Image To Video

Closed Source Model](https://modelslab.com/models/minimax/Hailuo-02-start-end-frame%20)

[![Sora 2 Pro Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/28616b8b-e430-4ce6-b693-768f995ec195.webp)](https://modelslab.com/models/openai/sora-2-pro-t2v)[OpenAI](https://modelslab.com/models/openai)

 [Sora 2 Pro Text To Video

Closed Source Model](https://modelslab.com/models/openai/sora-2-pro-t2v)

[![Seedance 1.0 Pro Fast Text to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/9a86a500-9dee-4489-858e-39fe9d9f3066.webp)](https://modelslab.com/models/byteplus/seedance-1.0-pro-fast-t2v)[Bytedance](https://modelslab.com/models/byteplus)

 [Seedance 1.0 Pro Fast Text to Video

Closed Source Model](https://modelslab.com/models/byteplus/seedance-1.0-pro-fast-t2v)

[![Kling Motion Control](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/f8506650-29ee-4c93-92d6-f9ad0a023c06.webp)](https://modelslab.com/models/klingai/kling-motion-control)[KlingAI](https://modelslab.com/models/klingai)

 [Kling Motion Control

Closed Source Model](https://modelslab.com/models/klingai/kling-motion-control)

[![Wan2.6 Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/2ac26148-86c9-4d5c-8146-43a62ec5669a.png)](https://modelslab.com/models/alibaba_cloud/wan2.6-t2v)[Alibaba](https://modelslab.com/models/alibaba_cloud)

 [Wan2.6 Text To Video

Closed Source Model](https://modelslab.com/models/alibaba_cloud/wan2.6-t2v)

[![Hailuo 02 Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/89f858fe-f734-4c69-a6e0-e4eff44a6417.webp)](https://modelslab.com/models/minimax/Hailuo-02-i2v)[MiniMax](https://modelslab.com/models/minimax)

 [Hailuo 02 Image To Video

Closed Source Model](https://modelslab.com/models/minimax/Hailuo-02-i2v)

Open Source Alternatives
---

Explore open-source models that offer similar capabilities with full transparency and flexibility

 [View all open source models](https://modelslab.com/models?feature=video&provider=open-source-models)

[![SVD](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/vRDgVJCkNyxWUSxclogvFeMWlgP9rV-metac3ZkLndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/svd)[ModelsLab](https://modelslab.com/models/modelslab)

 [SVD

Open Source Model](https://modelslab.com/models/modelslab/svd)

[![CogVideoX](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/VmhYqa98ohanHj8vL6mQjkr5TG2sSS-metaY29ndmlkZW94LndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/cogvideox)[ModelsLab](https://modelslab.com/models/modelslab)

 [CogVideoX

Open Source Model](https://modelslab.com/models/modelslab/cogvideox)

[![wan2.1](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/04d08a15-bc50-43e7-96e5-5342c249cf50.webp)](https://modelslab.com/models/modelslab/wan2.1)[ModelsLab](https://modelslab.com/models/modelslab)

 [wan2.1

Open Source Model](https://modelslab.com/models/modelslab/wan2.1)

About Omnihuman-1.5
---

OmniHuman 1.5 is a film-grade digital human model that turns a single image, an audio track, and a text prompt into lifelike video performances. It supports full prompt input, unrestricted camera and character movement, and intelligent audio understanding for natural, expressive, story-driven results.

### Technical Specifications

- **Model ID:** `omni-human-1.5`
- **Provider:** Byteplus
- **Category:** Video Models
- **Task:** Video Generation
- **Price:** $0.14 per second
- **Added:** October 28, 2025

### Key Features

- AI video generation from text or image input
- Motion control and camera movement parameters
- Adjustable frame rate and video duration
- High-quality cinematic output up to 1080p
- Native audio generation support

### Quick Start

Integrate Omnihuman-1.5 into your application with a single API call. Get your API key from the [pricing page](https://modelslab.com/pricing) to get started.

The example below uses Python; JavaScript, cURL, and PHP variants are available in the API documentation.

```python
import requests
import json

url = "https://modelslab.com/api/v7/video-fusion/image-to-video"

headers = {
    "Content-Type": "application/json"
}

data = {
    "model_id": "omni-human-1.5",
    "prompt": "your prompt here",
    "key": "YOUR_API_KEY"
}

try:
    response = requests.post(url, headers=headers, json=data)
    response.raise_for_status()  # raises HTTPError for 4XX/5XX responses
    result = response.json()
    print("API Response:")
    print(json.dumps(result, indent=2))
except requests.exceptions.HTTPError as http_err:
    print(f"HTTP error occurred: {http_err} - {response.text}")
except Exception as err:
    print(f"Other error occurred: {err}")
```

View the [full API documentation](https://modelslab.com/models/byteplus/omni-human-1.5/api) for SDKs, code examples in Python, JavaScript, and more.
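The quick-start snippet sends only a prompt, but OmniHuman 1.5 also expects the reference image and audio described in the Input section. The exact field names are not documented on this page, so the `init_image` and `init_audio` keys below are assumptions; verify them against the full API documentation before use. A minimal sketch:

```python
import requests

API_URL = "https://modelslab.com/api/v7/video-fusion/image-to-video"


def build_payload(api_key: str, prompt: str, image_url: str, audio_url: str) -> dict:
    """Assemble the request body.

    NOTE: 'init_image' and 'init_audio' are assumed field names, not
    confirmed by this page -- check the official API docs for the real
    schema before relying on them.
    """
    return {
        "model_id": "omni-human-1.5",
        "key": api_key,
        "prompt": prompt,
        "init_image": image_url,   # assumed: URL of the reference image
        "init_audio": audio_url,   # assumed: URL of the reference audio
    }


def generate(api_key: str, prompt: str, image_url: str, audio_url: str) -> dict:
    """POST the payload and return the decoded JSON response."""
    resp = requests.post(
        API_URL,
        json=build_payload(api_key, prompt, image_url, audio_url),
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()
```

Call `generate(...)` with your API key and hosted media URLs once you have confirmed the field names in the API reference.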

### Pricing

The Omnihuman-1.5 API costs $0.14 per second of generated video. Pay only for what you use, with no minimum commitments. [View pricing plans](https://modelslab.com/pricing)
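Since billing is per second of output, cost scales linearly with clip length. A quick estimator (the $0.14/s rate comes from this page; rounding to whole cents is my assumption, not a statement about how ModelsLab bills):

```python
PRICE_PER_SECOND = 0.14  # USD per second of generated video, per this page


def estimate_cost(seconds: float) -> float:
    """Estimated charge in USD for a clip of the given length,
    rounded to the nearest cent."""
    return round(seconds * PRICE_PER_SECOND, 2)


# A 10-second avatar clip costs about 10 * $0.14 = $1.40:
print(estimate_cost(10))
```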

### Use Cases

- Marketing and promotional video creation
- Social media short-form video content
- Product demos and explainer videos
- Creative storytelling and animation

[Learn more about Omnihuman-1.5](https://modelslab.com/omnihuman-15) [Browse Video Models](https://modelslab.com/models?feature=videogen) [More from Byteplus](https://modelslab.com/models/byteplus) [View Pricing](https://modelslab.com/pricing)

Omnihuman-1.5 FAQ
---

### What is Omnihuman-1.5?

OmniHuman 1.5 is a film-grade digital human model that turns a single image, an audio track, and a text prompt into lifelike video performances. It supports full prompt input, unrestricted camera and character movement, and intelligent audio understanding for natural, expressive, story-driven results.

### How do I use the Omnihuman-1.5 API?

You can integrate Omnihuman-1.5 into your application with a single API call. Sign up on ModelsLab to get your API key, then use the model ID "omni-human-1.5" in your API requests. We provide SDKs for Python, JavaScript, and cURL examples in the API documentation.
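Video generation is typically asynchronous: ModelsLab video endpoints generally return either a finished `output` list or a `"processing"` status with an ETA and a fetch URL to poll. That response shape is an assumption based on other ModelsLab video APIs and is not documented on this page, so treat this helper as a sketch:

```python
def extract_output(response: dict):
    """Return the list of output URLs if generation finished,
    or None if the job is still processing (or failed).

    Assumes the common ModelsLab response shape
    ({"status": ..., "output": [...]}); verify against the
    omni-human-1.5 API docs before relying on it.
    """
    if response.get("status") == "success":
        return response.get("output", [])
    return None  # still processing (poll the fetch URL) or an error


print(extract_output({"status": "success", "output": ["https://.../video.mp4"]}))
print(extract_output({"status": "processing", "eta": 40}))
```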

### How much does Omnihuman-1.5 cost?

Omnihuman-1.5 costs $0.14 per second of generated video. ModelsLab uses pay-per-use pricing with no minimum commitments. A free tier is available to get started.

### What is the Omnihuman-1.5 model ID?

The model ID for Omnihuman-1.5 is "omni-human-1.5". Use this ID in your API requests to specify this model.

### Does Omnihuman-1.5 have a free tier?

Yes, ModelsLab offers a free tier that lets you try Omnihuman-1.5 and other AI models. Sign up to get free API credits and start building immediately.

---

*This markdown version is optimized for AI agents and LLMs.*

**Links:**
- [Website](https://modelslab.com)
- [API Documentation](https://docs.modelslab.com)
- [Blog](https://modelslab.com/blog)

---
*Generated by ModelsLab - 2026-04-15*