---
title: Happyhorse 1.0 Video Edit API | Video Generation | ModelsLab
description: Edit videos effortlessly with HappyHorse Video Edit. Use natural language instructions and up to 5 reference images to perform precise local or global edits while preserving original motion and quality.
url: https://modelslab.com/models/alibaba_cloud/happyhorse-video-to-video.md
canonical: https://modelslab.com/models/alibaba_cloud/happyhorse-video-to-video.md
type: product
component: Playground/Endpoint/Index
generated_at: 2026-04-28T01:34:05.174822Z
---

[![happyhorse-1.0-video-edit thumbnail](https://assets.topadvisor.com/media/_solution_logo_09222023_8164684.png)](https://modelslab.com/models/alibaba_cloud)Happyhorse-1.0-Video-Edit
---

[by Alibaba](https://modelslab.com/models/alibaba_cloud)HappyHorse-Video-Edit supports advanced video editing through natural-language instructions. It performs local or global edits on video elements using up to 5 reference images, while precisely preserving the original motion dynamics and visual quality.

`happyhorse-1.0-v2v`

Closed Source Model [LLMs.txt](https://modelslab.com/models/alibaba_cloud/happyhorse-1.0-v2v/llms.txt)

[API Playground](/models/alibaba_cloud/happyhorse-video-to-video) [API Documentation](/models/alibaba_cloud/happyhorse-video-to-video/api)

Input
---

- **Prompt**: e.g. "Make the scene look like a sunset with warm golden light"
- **Video URL**: the source video to edit
- **Duration**
- **Resolution**

Advanced settings are available for more control over your input.

Generated **720p** video is billed at **$0.35 per second**; **1080p** video is billed at **$0.70 per second**.
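The per-second billing above can be sketched as a small cost estimator. The rates come from this page; the function name and structure are illustrative, not part of the API:

```python
# Per-second billing rates listed on this page (USD).
RATES = {"720p": 0.35, "1080p": 0.70}

def estimate_cost(duration_seconds: float, resolution: str) -> float:
    """Estimate the charge for a clip of the given length and resolution."""
    if resolution not in RATES:
        raise ValueError(f"Unknown resolution: {resolution!r}")
    return round(duration_seconds * RATES[resolution], 2)

# A 10-second 720p edit costs 10 * 0.35 = $3.50; the same clip at 1080p costs $7.00.
print(estimate_cost(10, "720p"))   # 3.5
print(estimate_cost(10, "1080p"))  # 7.0
```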


Related Models
---

Discover similar models you might be interested in

 [View all Video Models](https://modelslab.com/models?feature=videogen)

[![Veo 3.1](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/044f763e-8b9a-49a9-9c7c-317de8b13c9a.webp)](https://modelslab.com/models/google/veo-3.1)[Google](https://modelslab.com/models/google)

 [Veo 3.1

Closed Source Model](https://modelslab.com/models/google/veo-3.1)

[![Veo 2](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/72987a72-d15f-4584-b921-8c9d8dce9bd2.webp)](https://modelslab.com/models/google/veo2)[Google](https://modelslab.com/models/google)

 [Veo 2

Closed Source Model](https://modelslab.com/models/google/veo2)

[![Seedance 1.0 Pro Fast Text to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/9a86a500-9dee-4489-858e-39fe9d9f3066.webp)](https://modelslab.com/models/byteplus/seedance-1.0-pro-fast-t2v)[Bytedance](https://modelslab.com/models/byteplus)

 [Seedance 1.0 Pro Fast Text to Video

Closed Source Model](https://modelslab.com/models/byteplus/seedance-1.0-pro-fast-t2v)

[![Wan 2.5 Text to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/01de6ee6-6738-4a78-a25c-b0a30e72e2e9.webp)](https://modelslab.com/models/alibaba_cloud/wan2.5-t2v)[Alibaba](https://modelslab.com/models/alibaba_cloud)

 [Wan 2.5 Text to Video

Closed Source Model](https://modelslab.com/models/alibaba_cloud/wan2.5-t2v)

[![Omnihuman-1.5](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/4a33f63d-000a-4b4b-a4a6-1139a81e73fc.webp)](https://modelslab.com/models/byteplus/omni-human-1.5)[Bytedance](https://modelslab.com/models/byteplus)

 [Omnihuman-1.5

Closed Source Model](https://modelslab.com/models/byteplus/omni-human-1.5)

[![Seedance 1.0 Pro Image to Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/c7e4b47a-4c1c-46cb-b662-c939273bf876.webp)](https://modelslab.com/models/byteplus/seedance-1.0-pro-i2v)[Bytedance](https://modelslab.com/models/byteplus)

 [Seedance 1.0 Pro Image to Video

Closed Source Model](https://modelslab.com/models/byteplus/seedance-1.0-pro-i2v)

[![LTX 2 PRO Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/c1f3b2ed-f802-4ce1-a425-5fbce6ffd5e3.webp)](https://modelslab.com/models/ltx/ltx-2-pro-i2v)[ltx](https://modelslab.com/models/ltx)

 [LTX 2 PRO Image To Video

Closed Source Model](https://modelslab.com/models/ltx/ltx-2-pro-i2v)

[![Sora 2 Pro Text To Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/28616b8b-e430-4ce6-b693-768f995ec195.webp)](https://modelslab.com/models/openai/sora-2-pro-t2v)[OpenAI](https://modelslab.com/models/openai)

 [Sora 2 Pro Text To Video

Closed Source Model](https://modelslab.com/models/openai/sora-2-pro-t2v)

[![Veo 3 Fast](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/be91331a-e169-469a-af97-7552fc1510ee.webp)](https://modelslab.com/models/google/veo-3.0-fast-generate)[Google](https://modelslab.com/models/google)

 [Veo 3 Fast

Closed Source Model](https://modelslab.com/models/google/veo-3.0-fast-generate)

[![Kling V2.1  Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/17b21170-bd49-4758-b9c0-9475512d71d6.webp)](https://modelslab.com/models/klingai/kling-v2-1-i2v)[KlingAI](https://modelslab.com/models/klingai)

 [Kling V2.1 Image To Video

Closed Source Model](https://modelslab.com/models/klingai/kling-v2-1-i2v)

[![Gen4 Text to Image Turbo](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/902f4680-f503-4e94-b845-b779b189ec67.webp)](https://modelslab.com/models/runway_ml/gen4_turbo)[Runway ML](https://modelslab.com/models/runway_ml)

 [Gen4 Text to Image Turbo

Closed Source Model](https://modelslab.com/models/runway_ml/gen4_turbo)

[![Grok Imagine Image To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/f132b8f2-c565-4aa5-a09e-e022044b3106.png)](https://modelslab.com/models/xai/grok-imagine-video-i2v)[xAI](https://modelslab.com/models/xai)

 [Grok Imagine Image To Video

Closed Source Model](https://modelslab.com/models/xai/grok-imagine-video-i2v)

[![Kling V2.5 Turbo Text To Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/a66ac4fb-91df-4b23-ae0c-126c5f1f38a8.webp)](https://modelslab.com/models/klingai/kling-v2-5-turbo-t2v)[KlingAI](https://modelslab.com/models/klingai)

 [Kling V2.5 Turbo Text To Video

Closed Source Model](https://modelslab.com/models/klingai/kling-v2-5-turbo-t2v)

[![Omnihuman](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/9bf40816-3e9c-40a7-9f2d-e238b1fc43d3.webp)](https://modelslab.com/models/byteplus/omni-human)[Bytedance](https://modelslab.com/models/byteplus)

 [Omnihuman

Closed Source Model](https://modelslab.com/models/byteplus/omni-human)

[![wan2.6 Image To Video (Flash)](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/118afa95-ea11-4749-8b78-d513b799afcd.webp)](https://modelslab.com/models/alibaba_cloud/wan2.6-i2v-flash)[Alibaba](https://modelslab.com/models/alibaba_cloud)

 [wan2.6 Image To Video (Flash)

Closed Source Model](https://modelslab.com/models/alibaba_cloud/wan2.6-i2v-flash)

[![SVD](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/vRDgVJCkNyxWUSxclogvFeMWlgP9rV-metac3ZkLndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/svd)[ModelsLab](https://modelslab.com/models/modelslab)

 [SVD

Open Source Model](https://modelslab.com/models/modelslab/svd)

[![LTX 2 PRO Text To Video ](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/f454ab2a-00f1-4e92-afbf-5b8b3775ac35.webp)](https://modelslab.com/models/ltx/ltx-2-pro-t2v)[ltx](https://modelslab.com/models/ltx)

 [LTX 2 PRO Text To Video

Closed Source Model](https://modelslab.com/models/ltx/ltx-2-pro-t2v)

[![Hailuo 02 Image To Video](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/89f858fe-f734-4c69-a6e0-e4eff44a6417.webp)](https://modelslab.com/models/minimax/Hailuo-02-i2v)[MiniMax](https://modelslab.com/models/minimax)

 [Hailuo 02 Image To Video

Closed Source Model](https://modelslab.com/models/minimax/Hailuo-02-i2v)

Open Source Alternatives
---

Explore open-source models that offer similar capabilities with full transparency and flexibility

 [View all open source models](https://modelslab.com/models?feature=video&provider=open-source-models)

[![SVD](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/vRDgVJCkNyxWUSxclogvFeMWlgP9rV-metac3ZkLndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/svd)[ModelsLab](https://modelslab.com/models/modelslab)

 [SVD

Open Source Model](https://modelslab.com/models/modelslab/svd)

[![CogVideoX](https://images.stablediffusionapi.com/?Image=https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/livewire-tmp/VmhYqa98ohanHj8vL6mQjkr5TG2sSS-metaY29ndmlkZW94LndlYnA=-.webp)Popular](https://modelslab.com/models/modelslab/cogvideox)[ModelsLab](https://modelslab.com/models/modelslab)

 [CogVideoX

Open Source Model](https://modelslab.com/models/modelslab/cogvideox)

[![wan2.1](https://images.stablediffusionapi.com/?Image=https://assets.modelslab.ai/generations/04d08a15-bc50-43e7-96e5-5342c249cf50.webp)](https://modelslab.com/models/modelslab/wan2.1)[ModelsLab](https://modelslab.com/models/modelslab)

 [wan2.1

Open Source Model](https://modelslab.com/models/modelslab/wan2.1)

About Happyhorse-1.0-Video-Edit
---

Happyhorse-1.0-Video-Edit is a video generation AI model by Alibaba Cloud, available on ModelsLab. Access Happyhorse-1.0-Video-Edit through our API with pay-per-use pricing and no minimum commitments.

### Technical Specifications

| Specification | Value |
| --- | --- |
| Model ID | `happyhorse-1.0-v2v` |
| Provider | Alibaba Cloud |
| Category | Video Models |
| Task | Video Generation |
| Price | $0.35 per multiplier |
| Added | April 27, 2026 |

### Key Features

- AI video generation from text or image input
- Motion control and camera movement parameters
- Adjustable frame rate and video duration
- High-quality cinematic output up to 1080p
- Native audio generation support

### Quick Start

Integrate Happyhorse-1.0-Video-Edit into your application with a single API call. Get your API key from the [pricing page](https://modelslab.com/pricing) to get started.


```python
import requests
import json

url = "https://modelslab.com/api/v7/video-fusion/video-to-video"

headers = {
    "Content-Type": "application/json"
}

data = {
    "model_id": "happyhorse-1.0-v2v",
    "prompt": "your prompt here",
    "key": "YOUR_API_KEY"
}

try:
    response = requests.post(url, headers=headers, json=data)
    response.raise_for_status()  # Raises an HTTPError for bad responses (4XX or 5XX)
    result = response.json()
    print("API Response:")
    print(json.dumps(result, indent=2))
except requests.exceptions.HTTPError as http_err:
    print(f"HTTP error occurred: {http_err} - {response.text}")
except Exception as err:
    print(f"Other error occurred: {err}")
```

View the [full API documentation](https://modelslab.com/models/alibaba_cloud/happyhorse-1.0-v2v/api) for SDKs, code examples in Python, JavaScript, and more.
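The playground exposes Duration and Resolution controls alongside the prompt and source video. A request body carrying those options might be assembled as below. Note that the field names `video_url`, `duration`, and `resolution` are assumptions mirroring the playground UI, not confirmed by this page; check the full API documentation for the canonical parameter names:

```python
import json

def build_edit_request(api_key: str, prompt: str, video_url: str,
                       duration: int = 5, resolution: str = "720p") -> dict:
    """Assemble a request body for the video-to-video endpoint.

    NOTE: "video_url", "duration", and "resolution" are hypothetical field
    names taken from the playground controls; consult the API docs for the
    actual schema before sending requests.
    """
    return {
        "model_id": "happyhorse-1.0-v2v",
        "key": api_key,
        "prompt": prompt,
        "video_url": video_url,    # hypothetical field name
        "duration": duration,      # hypothetical field name
        "resolution": resolution,  # hypothetical field name
    }

body = build_edit_request(
    "YOUR_API_KEY",
    "Make the scene look like a sunset with warm golden light",
    "https://example.com/input.mp4",
)
print(json.dumps(body, indent=2))
```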

### Pricing

Happyhorse-1.0-Video-Edit API costs $0.35 per multiplier. Pay only for what you use with no minimum commitments. [View pricing plans](https://modelslab.com/pricing)

### Use Cases

- Marketing and promotional video creation
- Social media short-form video content
- Product demos and explainer videos
- Creative storytelling and animation

[Browse Video Models](https://modelslab.com/models?feature=videogen) [More from Alibaba\_cloud](https://modelslab.com/models/alibaba_cloud) [View Pricing](https://modelslab.com/pricing)

Happyhorse-1.0-Video-Edit FAQ
---

### What is Happyhorse-1.0-Video-Edit?

Happyhorse-1.0-Video-Edit is a video generation AI model by Alibaba Cloud, available on ModelsLab. Access it through our API with pay-per-use pricing and no minimum commitments.

### How do I use the Happyhorse-1.0-Video-Edit API?

You can integrate Happyhorse-1.0-Video-Edit into your application with a single API call. Sign up on ModelsLab to get your API key, then use the model ID "happyhorse-1.0-v2v" in your API requests. We provide SDKs for Python, JavaScript, and cURL examples in the API documentation.

### How much does Happyhorse-1.0-Video-Edit cost?

Happyhorse-1.0-Video-Edit costs $0.35 per multiplier. ModelsLab uses pay-per-use pricing with no minimum commitments. A free tier is available to get started.

### What is the Happyhorse-1.0-Video-Edit model ID?

The model ID for Happyhorse-1.0-Video-Edit is "happyhorse-1.0-v2v". Use this ID in your API requests to specify this model.

### Does Happyhorse-1.0-Video-Edit have a free tier?

Yes, ModelsLab offers a free tier that lets you try Happyhorse-1.0-Video-Edit and other AI models. Sign up to get free API credits and start building immediately.

---

*This markdown version is optimized for AI agents and LLMs.*

**Links:**
- [Website](https://modelslab.com)
- [API Documentation](https://docs.modelslab.com)
- [Blog](https://modelslab.com/blog)

---
*Generated by ModelsLab - 2026-04-28*