Seedream 5.0 API: ByteDance's New Image Model on ModelsLab

Adhik Joshi | 5 min read | Image Generation

ByteDance launched Seedream 5.0 Lite in February 2026 — their latest multimodal image generation model, and it's now available via the ModelsLab API. If you're building image generation pipelines, this gives you a new option: a ByteDance-trained model with strong composition, accurate text rendering, and both text-to-image and image-to-image capabilities under one API key.

This guide covers the API endpoints, request format, Python examples, and when you'd pick Seedream 5.0 over FLUX or Stable Diffusion models.

What Is Seedream 5.0?

Seedream is ByteDance's text-to-image model line — the same team behind Stable Video Diffusion contributions and PixelDance. Version 5.0 Lite is the latest release (February 2026), succeeding Seedream 4.5 with better prompt adherence, cleaner composition, and significantly improved text rendering inside images.

Key strengths compared to FLUX-family models:

  • Text in images: Accurate character-by-character rendering, good for UI mockups, meme generation, or branded content
  • Composition: Better at following multi-subject prompts (person + background + lighting specified together)
  • Image editing: The seedream-5-lite-i2i model accepts a source image and produces targeted edits rather than full regeneration
  • Speed: "Lite" designation means faster inference — useful for high-throughput pipelines

Models Available on ModelsLab

ModelsLab currently hosts two Seedream 5.0 Lite variants:

  • Text-to-Image: Model ID seedream-5-lite-t2i — generate from prompt alone
  • Image-to-Image: Model ID seedream-5-lite-i2i — edit or transform an existing image

Both are available under the ModelsLab model catalog.
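Since both variants live under the same catalog and API key, a small lookup helper keeps the IDs in one place. This is a hypothetical convenience, not part of the ModelsLab SDK; only the two model IDs come from the catalog above:

```python
# Hypothetical helper mapping a task name to the Seedream 5.0 Lite model IDs
# listed above. The mapping values are the catalog IDs; the helper itself
# is an illustration.
SEEDREAM_MODELS = {
    "text2img": "seedream-5-lite-t2i",
    "img2img": "seedream-5-lite-i2i",
}

def seedream_model_id(task: str) -> str:
    """Return the ModelsLab model ID for a task, or raise on unknown tasks."""
    try:
        return SEEDREAM_MODELS[task]
    except KeyError:
        raise ValueError(f"Unsupported task: {task!r}") from None
```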

Text-to-Image API

Endpoint: POST https://modelslab.com/api/v6/images/text2img

import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = {
    "key": "YOUR_API_KEY",
    "model_id": "seedream-5-lite-t2i",
    "prompt": "A developer sitting at a dual-monitor setup, dark theme IDE open, with clean overhead lighting and a plant on the desk",
    "negative_prompt": "blurry, low quality, watermark, text artifacts",
    "width": "1024",
    "height": "1024",
    "samples": "1",
    "num_inference_steps": "20",
    "guidance_scale": 7.5,
    "enhance_prompt": "yes",
    "safety_checker": "no",
    "seed": None
}

response = requests.post(url, json=payload)
result = response.json()

if result.get("status") == "success":
    print(f"Image URL: {result['output'][0]}")
elif result.get("status") == "processing":
    # Async job — poll fetch_result endpoint
    fetch_url = result.get("fetch_result")
    print(f"Processing — poll: {fetch_url}")
else:
    raise RuntimeError(f"Generation failed: {result.get('message')}")

Async Polling (for processing jobs)

import requests
import time

def wait_for_result(fetch_url: str, api_key: str, max_polls: int = 20) -> str:
    """Poll until image is ready, return URL."""
    for _ in range(max_polls):
        resp = requests.post(fetch_url, json={"key": api_key})
        data = resp.json()
        status = data.get("status")
        if status == "success":
            return data["output"][0]
        elif status == "error":
            raise Exception(f"Generation failed: {data.get('message')}")
        time.sleep(3)
    raise TimeoutError("Image generation timed out after polling")

# Usage
fetch_url = "https://modelslab.com/api/v6/images/fetch/JOB_ID"
image_url = wait_for_result(fetch_url, api_key="YOUR_API_KEY")
print(f"Done: {image_url}")

Image-to-Image API

Use seedream-5-lite-i2i when you want to modify an existing image while preserving structure. Good for: style transfer, product photo enhancement, background replacement, and UI iteration.

url = "https://modelslab.com/api/v6/images/img2img"

payload = {
    "key": "YOUR_API_KEY",
    "model_id": "seedream-5-lite-i2i",
    "prompt": "Same scene but with a sunset window view behind the developer, warm orange light",
    "negative_prompt": "low quality, blurry",
    "init_image": "https://your-s3-bucket.com/source-image.jpg",  # URL or base64
    "strength": 0.7,  # 0.0 = no change, 1.0 = full regeneration
    "width": "1024",
    "height": "1024",
    "samples": "1",
    "num_inference_steps": "25",
    "guidance_scale": 7.5
}

response = requests.post(url, json=payload)
result = response.json()

The strength parameter controls how far the output deviates from the source. For targeted edits, keep it between 0.4 and 0.7; higher values let the model reinterpret the scene more freely.
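The best strength value is usually found empirically. One way is to build a small sweep of payloads from the same source image and compare the outputs. A sketch (field names follow the img2img example above; the actual POST is omitted):

```python
def strength_sweep_payloads(base_payload: dict, strengths=(0.4, 0.55, 0.7)) -> list:
    """Build one img2img payload per strength value, leaving the base untouched."""
    return [{**base_payload, "strength": s} for s in strengths]

base = {
    "key": "YOUR_API_KEY",
    "model_id": "seedream-5-lite-i2i",
    "prompt": "Sunset window view behind the developer, warm orange light",
    "init_image": "https://your-s3-bucket.com/source-image.jpg",
}
variants = strength_sweep_payloads(base)
# Each payload can then be POSTed to the img2img endpoint shown above.
```

Generating the sweep in one batch makes it easy to eyeball where the edit starts drifting away from the source structure.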

curl Examples

# Text-to-image
curl -X POST "https://modelslab.com/api/v6/images/text2img" \
  -H "Content-Type: application/json" \
  -d '{
    "key": "YOUR_API_KEY",
    "model_id": "seedream-5-lite-t2i",
    "prompt": "A mobile app UI wireframe on a tablet, clean minimal design",
    "width": "768",
    "height": "1024",
    "samples": "1",
    "num_inference_steps": "20",
    "guidance_scale": 7
  }'

Seedream 5.0 vs FLUX.1 Schnell: When to Use Which

Both are fast models, but they're optimized for different tasks:

  • Pick Seedream 5.0 when: You need text in the image to be accurate, you're working with multi-subject compositions, or you want image editing (I2I) without a separate inpainting step
  • Pick FLUX.1 Schnell when: You need maximum throughput at lowest cost, or your prompts are primarily photorealistic scenes without text elements
  • Pick Fluxgram V1.0 when: You need consistent character faces across multiple generations (dedicated character identity model)

Seedream 5.0's text rendering advantage is significant for any use case involving labels, UI mockups, product copy, or social media graphics where text accuracy matters.
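When the image must contain exact copy, a common prompting heuristic is to quote the literal string inside the prompt so the model treats it as text to render rather than scene description. A hypothetical helper (this quoting convention is a general prompting practice, not a documented Seedream parameter):

```python
def text_render_prompt(scene: str, literal_text: str) -> str:
    """Embed exact display text in a prompt, quoted so it reads as copy to render."""
    return f'{scene}, with the text "{literal_text}" rendered clearly and legibly'

prompt = text_render_prompt(
    "A minimal product label on a glass bottle",
    "COLD BREW 250ml",
)
```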

Rate Limits and Pricing

Seedream 5.0 Lite runs on ModelsLab's standard inference infrastructure. Pricing follows the ModelsLab per-request model — check your dashboard for current per-image rates. For high-volume workloads (>10,000 images/day), reach out to the ModelsLab team about enterprise pricing.

The Lite variant specifically is optimized for throughput, so it's suitable for batch processing pipelines where you're generating multiple variations in parallel.
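Because each request is I/O-bound, a thread pool is usually enough to generate variations in parallel. A sketch with an injectable submit function, so the pattern works without hard-coding the HTTP call; `submit_fn` would wrap the requests.post call from the examples above:

```python
from concurrent.futures import ThreadPoolExecutor

def batch_generate(prompts, submit_fn, max_workers=4):
    """Submit prompts concurrently; results come back in prompt order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(submit_fn, prompts))

# Usage sketch: submit_fn=lambda p: requests.post(url, json={**payload, "prompt": p}).json()
```

Keep max_workers modest so you stay within your plan's rate limits.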

Integration Pattern: Fallback Chain

A common pattern in production pipelines is to route by content type:

def generate_image(prompt: str, has_text: bool = False, init_image: str | None = None):
    """Route to best model based on requirements."""
    if init_image:
        model_id = "seedream-5-lite-i2i"
    elif has_text:
        model_id = "seedream-5-lite-t2i"  # Better text rendering
    else:
        model_id = "flux-schnell"  # Fastest for photorealistic

    # call_modelslab_api is your wrapper around the endpoints shown above
    return call_modelslab_api(model_id=model_id, prompt=prompt)
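The routing function picks a model up front; a true fallback chain also retries with the next model when a call fails. A sketch, assuming `call_fn` wraps the API request and raises on error (a wrapper like `call_modelslab_api` above would qualify):

```python
def generate_with_fallback(prompt, call_fn,
                           model_ids=("seedream-5-lite-t2i", "flux-schnell")):
    """Try each model in order; return the first successful result."""
    last_error = None
    for model_id in model_ids:
        try:
            return call_fn(model_id=model_id, prompt=prompt)
        except Exception as exc:  # in production, narrow this to API errors
            last_error = exc
    raise RuntimeError(f"All models failed for prompt: {prompt!r}") from last_error
```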

Getting Started

To use Seedream 5.0 via ModelsLab:

  1. Create an account at modelslab.com
  2. Get your API key from the dashboard
  3. Use model ID seedream-5-lite-t2i or seedream-5-lite-i2i in the standard ModelsLab API

No separate ByteDance or BytePlus account is needed — ModelsLab handles the model infrastructure. One API key, one endpoint, and access to Seedream alongside FLUX, Stable Diffusion, and 10,000+ other models.
