Available now on ModelsLab · Image Generation

Edit Images Instructionally

Wan 2.7 Image Edit

Control Every Pixel Precisely

Text Instructions

Semantic Image Changes

Change backgrounds or elements via natural language while preserving faces, poses, and clothing.

Multi-Reference

Up to Nine Images

Combine up to nine reference images for style transfer, subject swaps, and background composition.

Color Palette

Exact Color Control

Extract or set color schemes from references to match brand guidelines precisely.

Examples

See what Wan 2.7 Image Edit can create

Copy any prompt below and try it yourself in the playground.

Cityscape Sunset

Transform urban street at dusk: replace sky with vibrant sunset, keep buildings and cars intact, cinematic lighting, high detail, photorealistic

Product Mockup

Edit product photo: change background to minimalist white studio, preserve product texture and shadows, professional e-commerce style

Architecture Render

Modern building exterior: swap to rainy night scene, maintain structure and windows, add realistic reflections on wet surfaces

Landscape Fusion

Mountain landscape: blend with forest foreground from reference, golden hour lighting, ultra-detailed nature, serene atmosphere

For Developers

A few lines of code.
Edit images. One call.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per image, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

response = requests.post(
    "https://modelslab.com/api/v7/images/image-to-image",
    json={
        "key": "YOUR_API_KEY",
        "prompt": "girl from image one wearing dress from image two",
        "init_image": "",  # URL of the source image to edit
    },
)
print(response.json())

FAQ

Common questions about Wan 2.7 Image Edit

Read the docs

What is Wan 2.7 Image Edit?

Wan 2.7 Image Edit is an instruction-based model for precise image modifications. It uses text prompts to edit specific elements without altering the rest of the image, and supports multi-reference inputs of up to nine images.

How does the Wan 2.7 Image Edit API work?

Send an input image and text instructions via the Wan 2.7 Image Edit API endpoint. The model processes semantic edits such as background changes or object swaps and outputs high-quality edited images in formats such as PNG.
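A minimal sketch of that request flow, reusing the `image-to-image` endpoint and `init_image` parameter from the snippet above; the helper names, timeout, and error handling are illustrative, not part of the official SDK.

```python
import requests

API_URL = "https://modelslab.com/api/v7/images/image-to-image"

def build_payload(api_key: str, prompt: str, init_image: str) -> dict:
    """Assemble the JSON body for one semantic edit request."""
    return {
        "key": api_key,
        "prompt": prompt,          # natural-language edit instruction
        "init_image": init_image,  # URL of the source image to edit
    }

def edit_image(api_key: str, prompt: str, init_image: str) -> dict:
    """POST the edit request and return the decoded JSON response."""
    response = requests.post(
        API_URL,
        json=build_payload(api_key, prompt, init_image),
        timeout=60,
    )
    response.raise_for_status()  # surface HTTP errors early
    return response.json()
```

For example, `edit_image("YOUR_API_KEY", "replace sky with vibrant sunset", "https://example.com/street.png")` would send one background-swap instruction against a hosted source image.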

What features set Wan 2.7 Image Edit apart?

It features chain-of-thought reasoning for accurate edits and multi-agent consistency, and includes color palette control and superior text rendering. It outperforms mask-based inpainting tools.

Is Wan 2.7 Image Edit better than traditional editing tools?

Yes. Wan 2.7 Image Edit provides semantic understanding beyond traditional tools, handling complex multi-reference edits and precise color matching. This makes it well suited to professional workflows.

Does Wan 2.7 Image Edit support image-to-image editing?

It supports image-to-image editing with text guidance. Use reference images for style transfer or element replacement; pixel-level accuracy is maintained on preserved areas.

What output formats are supported?

It outputs PNG, WEBP, and TIFF for transparency support; JPG lacks an alpha channel. Quality is adjustable from 20 to 99 for file-size control.
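The format and quality rules above can be sketched as two small client-side helpers; the function names are illustrative and not part of the API, only the format list and the 20-99 quality range come from the text.

```python
# Formats listed above that can carry an alpha channel.
TRANSPARENT_FORMATS = {"png", "webp", "tiff"}

def choose_format(needs_transparency: bool, preferred: str = "jpg") -> str:
    """Fall back to PNG when the preferred format cannot carry transparency."""
    fmt = preferred.lower()
    if needs_transparency and fmt not in TRANSPARENT_FORMATS:
        return "png"
    return fmt

def clamp_quality(quality: int) -> int:
    """Keep the quality setting inside the supported 20-99 range."""
    return max(20, min(99, quality))
```

For instance, `choose_format(True, "jpg")` falls back to `"png"` because JPG has no alpha channel, and `clamp_quality(150)` is reduced to `99`.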

Ready to create?

Start generating with Wan 2.7 Image Edit on ModelsLab.