Available now on ModelsLab · Video Generation

Gen4 Aleph (Video Edit)

Edit footage. Not pixels.

Transform Video. Keep Context.

In-Context Editing

Preserve Scene Coherence

Analyzes lighting, spatial relationships, and motion to integrate edits seamlessly with original footage.

Multi-Task Capability

Object, Lighting, Angle Control

Remove, add, or relocate objects; adjust lighting; generate new camera angles from single shots.

Fast Turnaround

60-120 Second Output

Processes the first 5 seconds of the input video and supports files up to 16 MB across multiple aspect ratios.
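A minimal pre-flight check before uploading a clip, based on the 16 MB cap and the aspect ratios listed in the FAQ below. The function name and return shape are illustrative, not part of the ModelsLab API:

```python
import os

MAX_BYTES = 16 * 1024 * 1024  # 16 MB input cap stated above
SUPPORTED_RATIOS = {"16:9", "9:16", "1:1", "4:3", "3:4"}

def preflight(path: str, aspect_ratio: str) -> list[str]:
    """Return a list of problems; an empty list means the clip looks uploadable."""
    problems = []
    if os.path.getsize(path) > MAX_BYTES:
        problems.append(f"{path} exceeds the 16 MB upload limit")
    if aspect_ratio not in SUPPORTED_RATIOS:
        problems.append(f"unsupported aspect ratio {aspect_ratio!r}")
    return problems
```

Running this locally before the API call avoids paying for a request that will be rejected for size or format.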

Examples

See what Gen4 Aleph (Video Edit) can create

Copy any prompt below and try it yourself in the playground.

Weather Transformation

Change the scene to winter with snow falling gently on the landscape, maintaining the original lighting and composition.

Object Removal

Remove the distracting shadow from the left side of the frame and smooth the background naturally.

Camera Angle Shift

Generate an over-the-shoulder reverse angle of this shot, maintaining consistent lighting and motion.

VFX Enhancement

Add subtle lens flares and enhance the color grading to create a more cinematic, warm tone.

For Developers

Edit video in a few lines of code.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per second, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

response = requests.post(
    "https://modelslab.com/api/v7/video-fusion/video-to-video",
    json={
        "key": "YOUR_API_KEY",
        "prompt": "make it winter",
        "init_video": "https://assets.modelslab.ai/generations/069b5d64-3699-4bc5-98bd-46e30ded661a.mp4",
        "aspect_ratio": "1280:720",
    },
)
print(response.json())
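Since edits take 60-120 seconds, longer jobs may come back queued rather than finished. A polling sketch under the assumption that a queued response carries a "status" field and a "fetch_result" URL; those field names are assumptions, so check the ModelsLab docs for the exact response shape:

```python
import time
import requests

def wait_for_video(submit_response: dict, api_key: str,
                   poll_seconds: int = 10, timeout: int = 300) -> dict:
    """Poll until the edit finishes or the timeout passes.

    Assumes the API reports "status": "processing" for queued jobs and
    includes a "fetch_result" URL to poll (assumed names, not confirmed
    by this page).
    """
    deadline = time.time() + timeout
    result = submit_response
    while result.get("status") == "processing" and time.time() < deadline:
        time.sleep(poll_seconds)
        result = requests.post(result["fetch_result"],
                               json={"key": api_key}).json()
    return result
```

You would pass `response.json()` from the submit call above straight into `wait_for_video`.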

FAQ

Common questions about Gen4 Aleph (Video Edit)

Read the docs

What is Gen4 Aleph?

Gen4 Aleph is an in-context video editing model designed for post-production tasks like object removal, relighting, VFX addition, camera angle generation, and environmental changes. It transforms existing footage via text prompts without regenerating scenes from scratch.

How is it different from text-to-video or image-to-video models?

Unlike models that generate video from text or images, Gen4 Aleph enhances real footage by understanding scene context, including lighting, spatial arrangement, and object relationships, before applying edits. This ensures coherent, realistic results integrated seamlessly with the original footage.

What are the input limits?

Gen4 Aleph processes input videos up to 16 MB, using the first 5 seconds for generation. It supports multiple aspect ratios for platform versatility: 16:9, 9:16, 1:1, 4:3, and 3:4.

Can I use reference images?

Yes, Gen4 Aleph accepts up to three reference images to guide the style, mood, or content of the output, enabling more precise creative control over transformations.
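A sketch of building a request that includes reference images. The "reference_images" field name is a hypothetical placeholder, not confirmed by this page; only the three-image limit comes from the answer above, so confirm the exact parameter in the API docs:

```python
def build_payload(prompt: str, video_url: str, refs: list[str]) -> dict:
    """Assemble a video-to-video request body.

    "reference_images" is an assumed field name used for illustration;
    the three-image cap comes from the FAQ above.
    """
    if len(refs) > 3:
        raise ValueError("Gen4 Aleph accepts at most three reference images")
    return {
        "key": "YOUR_API_KEY",
        "prompt": prompt,
        "init_video": video_url,
        "reference_images": refs,  # assumed parameter name
    }
```

The resulting dict would be posted as `json=` to the same video-to-video endpoint shown in the developer section.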

How fast is it?

Gen4 Aleph delivers high-quality video edits in 60-120 seconds, making it ideal for rapid prototyping and quick-turnaround workflows in production environments.

What can I do with the API?

The Gen4 Aleph API enables object manipulation, scene generation, environmental changes, camera angle generation, relighting, VFX insertion, and weather or season modifications, all controlled through detailed text prompts.

Ready to create?

Start generating with Gen4 Aleph (Video Edit) on ModelsLab.