ModelsLab with Vercel AI SDK: No Package Required

Adhik Joshi
7 min read | API

Why Add ModelsLab to Vercel AI SDK?

Vercel's AI SDK ships with image providers for DALL-E and a handful of Imagen/Stability models. That's useful if you want one or two models. It's not enough if you're building a product where model variety matters — character consistency, photorealism, anime style, product photography, inpainting.

ModelsLab gives you 50,000+ models under a single API key. FLUX.1 Schnell for speed. SDXL Turbo for style variety. Fluxgram for portrait consistency. And pricing starting at $0.0047 per image (RealTime Text To Image) — versus $0.04+ from OpenAI's DALL-E 3.

The AI SDK's generateImage() function works with any provider that implements the ImageModelV1 interface. This post shows you how to build that adapter, drop it into a Next.js app, and start generating images through ModelsLab from your existing AI SDK code.

Prerequisites

Before building the adapter, you'll want:

  • Node.js 18+ (for the built-in fetch the adapter uses)
  • A Next.js project using the App Router (the API route below lives under app/api/)
  • The ai and @ai-sdk/provider packages installed: npm install ai @ai-sdk/provider
  • A ModelsLab API key (covered in the Environment Variables section below)

Step 1: Build the ModelsLab Provider Adapter

The AI SDK exposes an ImageModelV1 interface from @ai-sdk/provider. Any class that satisfies this interface can be passed to generateImage().

Create lib/modelslab.ts in your project:

// lib/modelslab.ts
import type { ImageModelV1, ImageModelV1CallOptions } from '@ai-sdk/provider'

/**
 * ModelsLab realtime image generation model.
 * Wraps the /api/v6/realtime/text2img endpoint.
 */
export class ModelsLabImageModel implements ImageModelV1 {
  readonly specificationVersion = 'v1' as const
  readonly modelId: string

  constructor(
    modelId: string,
    private config: { apiKey: string; baseUrl?: string }
  ) {
    this.modelId = modelId
  }

  get provider(): string {
    return 'modelslab'
  }

  get maxImagesPerCall(): number {
    return 4
  }

  async doGenerate(options: ImageModelV1CallOptions) {
    const { prompt, n = 1, size, seed } = options

    // Parse "1024x1024" format or default to 512x512
    const [width, height] = size
      ? size.split('x').map(Number)
      : [512, 512]

    const baseUrl =
      this.config.baseUrl ?? 'https://modelslab.com/api/v6'

    const response = await fetch(`${baseUrl}/realtime/text2img`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        key: this.config.apiKey,
        model_id: this.modelId,
        prompt,
        width: String(width),
        height: String(height),
        samples: n,
        num_inference_steps: 20,
        safety_checker: false,
        enhance_prompt: 'yes',
        seed: seed ?? null,
      }),
    })

    if (!response.ok) {
      throw new Error(
        `ModelsLab API error ${response.status}: ${await response.text()}`
      )
    }

    const data = await response.json()

    if (data.status !== 'success') {
      // ModelsLab reports API-level failures with status: 'error'; queued
      // jobs come back as 'processing' with a fetch URL, which this
      // adapter does not poll.
      throw new Error(`ModelsLab: ${data.message ?? data.status}`)
    }

    // ModelsLab returns CDN URLs, while the ImageModelV1 contract expects
    // an array of base64 strings (or Uint8Arrays) — fetch and convert.
    const images = await Promise.all(
      (data.output as string[]).map(async (url) => {
        const imgResponse = await fetch(url)
        const buffer = await imgResponse.arrayBuffer()
        return Buffer.from(buffer).toString('base64')
      })
    )

    return {
      images,
      warnings: [],
      response: {
        timestamp: new Date(),
        modelId: this.modelId,
        headers: undefined,
      },
    }
  }
}

/**
 * Factory — mirrors the pattern of other AI SDK providers.
 *
 * const modelslab = createModelsLab(process.env.MODELSLAB_API_KEY!)
 * modelslab.image('flux-1-schnell')
 */
export function createModelsLab(apiKey: string, baseUrl?: string) {
  return {
    image(modelId: string) {
      return new ModelsLabImageModel(modelId, { apiKey, baseUrl })
    },
  }
}

A few things worth noting about this adapter:

  • specificationVersion: 'v1' — required by the AI SDK provider contract.
  • maxImagesPerCall: 4 — ModelsLab's realtime endpoint supports up to 4 samples per request. The AI SDK batches automatically when you request more with n.
  • Image fetching — ModelsLab returns CDN URLs, not image data. The adapter fetches each URL and converts it to base64, which the AI SDK then exposes to callers as the consistent { base64, uint8Array } format.
  • Error handling — checks both the HTTP status and the status field in the JSON body that ModelsLab uses for API-level errors.
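
One more hardening note: the inline size parsing in doGenerate produces NaN for malformed strings like "1024". A defensive helper (hypothetical, not part of the adapter above) could validate the format explicitly:

```typescript
// Hypothetical helper: parse an AI SDK size string like "1024x1024"
// into [width, height], falling back to a default on malformed input.
function parseSize(
  size: string | undefined,
  fallback: [number, number] = [512, 512]
): [number, number] {
  if (!size) return fallback
  const match = /^(\d+)x(\d+)$/.exec(size)
  if (!match) return fallback
  return [Number(match[1]), Number(match[2])]
}
```

Swapping this in for the inline split keeps a malformed size from sending NaN width/height values to the API.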

Step 2: Basic Usage With generateImage()

Once you have the adapter, using it is identical to any other AI SDK image provider:

import { experimental_generateImage as generateImage } from 'ai'
import { createModelsLab } from '@/lib/modelslab'
import fs from 'node:fs'

const modelslab = createModelsLab(process.env.MODELSLAB_API_KEY!)

async function main() {
  const { image } = await generateImage({
    model: modelslab.image('flux-1-schnell'),
    prompt: 'A developer at a desk surrounded by multiple monitors, dark theme, cinematic lighting',
    size: '1024x1024',
  })

  // Save to disk
  fs.writeFileSync('output.png', image.uint8Array)
  console.log('Image saved — base64 length:', image.base64.length)
}

main()

Generate multiple images in one call:

const { images } = await generateImage({
  model: modelslab.image('sdxl'),
  prompt: 'Product photography of a sleek wireless keyboard on a white background',
  size: '1024x1024',
  n: 4, // generates 4 variants
})

images.forEach((img, i) => {
  fs.writeFileSync(`product-${i}.png`, img.uint8Array)
})
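
When a call asks for more images than maxImagesPerCall (4 here), the AI SDK splits it into several provider calls behind the scenes. The arithmetic is roughly this — an illustrative sketch, not the SDK's actual code:

```typescript
// Illustrative: how a request for n images splits into batches of at
// most maxPerCall. The AI SDK handles this internally; this sketch only
// shows the shape of the batching.
function batchSizes(n: number, maxPerCall: number): number[] {
  const batches: number[] = []
  let remaining = n
  while (remaining > 0) {
    const take = Math.min(remaining, maxPerCall)
    batches.push(take)
    remaining -= take
  }
  return batches
}

// batchSizes(10, 4) → [4, 4, 2]: three requests to ModelsLab
```

This is why a single generateImage() call with a large n still works — it just costs multiple round trips to the provider.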

Step 3: Next.js API Route for Client-Side Image Generation

The typical pattern is a server-side API route that accepts a prompt from your frontend and returns the generated image as base64 data. This keeps your API key off the client.

// app/api/generate-image/route.ts
import { experimental_generateImage as generateImage } from 'ai'
import { createModelsLab } from '@/lib/modelslab'
import { NextRequest, NextResponse } from 'next/server'

const modelslab = createModelsLab(process.env.MODELSLAB_API_KEY!)

// Model registry — keeps client requests from specifying arbitrary model IDs
const ALLOWED_MODELS: Record<string, string> = {
  fast: 'flux-1-schnell',
  quality: 'flux-1-dev',
  portrait: 'fluxgram-v1-0',
  sdxl: 'sdxl',
}

export async function POST(req: NextRequest) {
  const { prompt, model = 'fast', size = '1024x1024' } = await req.json()

  if (!prompt || typeof prompt !== 'string') {
    return NextResponse.json({ error: 'prompt required' }, { status: 400 })
  }

  const modelId = ALLOWED_MODELS[model]
  if (!modelId) {
    return NextResponse.json({ error: 'invalid model' }, { status: 400 })
  }

  try {
    const { image } = await generateImage({
      model: modelslab.image(modelId),
      prompt,
      size: size as `${number}x${number}`,
    })

    return NextResponse.json({
      base64: image.base64,
      mediaType: 'image/png',
    })
  } catch (error) {
    console.error('ModelsLab generateImage error:', error)
    return NextResponse.json(
      { error: 'image generation failed' },
      { status: 500 }
    )
  }
}

Client-side usage from a React component:

'use client'
import { useState } from 'react'

export function ImageGenerator() {
  const [prompt, setPrompt] = useState('')
  const [src, setSrc] = useState<string | null>(null)
  const [loading, setLoading] = useState(false)

  async function generate() {
    setLoading(true)
    try {
      const res = await fetch('/api/generate-image', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt, model: 'fast' }),
      })
      if (!res.ok) throw new Error(`Request failed: ${res.status}`)
      const data = await res.json()
      setSrc(`data:image/png;base64,${data.base64}`)
    } finally {
      // Reset loading even when the request fails
      setLoading(false)
    }
  }

  return (
    <div>
      <textarea value={prompt} onChange={e => setPrompt(e.target.value)} />
      <button onClick={generate} disabled={loading}>
        {loading ? 'Generating...' : 'Generate'}
      </button>
      {src && <img src={src} alt="Generated" />}
    </div>
  )
}

Model Selection Guide

ModelsLab's catalog has 50,000+ models. Here are the ones worth reaching for first:

  • flux-1-schnell — fastest option, 4-step generation, good for prototyping. Use when latency matters more than quality.
  • flux-1-dev — higher-quality FLUX.1 variant, 28-step generation. Use for production image generation where you need detail.
  • fluxgram-v1-0 — character-consistent generation. Use when you're generating the same person across multiple images (product demos, character design).
  • sdxl — SDXL base, reliable for general-purpose generation. Large community of fine-tuned variants if you need a specific style.
  • realistic-vision-v6 — photorealistic output. Use for product photography, lifestyle images, anything that needs to look like a real photo.

You can browse the full catalog at modelslab.com/models and swap any model slug into the modelId parameter above.

Environment Variables

Add to your .env.local:

MODELSLAB_API_KEY=your_api_key_here

Get your key from the ModelsLab dashboard. Pricing is pay-as-you-go — no subscription required.
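
A missing key only surfaces at request time, which makes for confusing runtime errors. A small startup guard (a hypothetical helper, not part of ModelsLab or the AI SDK) fails fast instead:

```typescript
// Hypothetical guard: throw at startup if a required environment
// variable is absent, rather than failing mid-request.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`)
  }
  return value
}
```

Calling requireEnv('MODELSLAB_API_KEY') once at module load in lib/modelslab.ts means a deploy with a missing key fails immediately instead of on the first generation request.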

Switching Providers Without Changing Code

One of the core benefits of the AI SDK's provider abstraction is that swapping providers is a one-line change. If you later want to compare ModelsLab output against DALL-E 3 or Stability AI:

// Compare providers without touching your application logic
const provider = useModelsLab
  ? modelslab.image('flux-1-dev')
  : openai.image('dall-e-3')

const { image } = await generateImage({
  model: provider,
  prompt,
  size: '1024x1024',
})

Same generateImage() call. Different providers. This is the abstraction that makes multi-provider architectures practical in production.

What's Next

This adapter covers the straightforward text-to-image path. ModelsLab also supports:

  • Image-to-image via /api/v6/image_to_image — pass an init image and strength parameter
  • Inpainting — mask a region and regenerate it
  • ControlNet — pose-guided and depth-guided generation
  • LoRA fine-tunes — load custom LoRA weights for brand-specific styles

Each of these maps to a different endpoint and can be wrapped the same way as the adapter above. The providerOptions parameter in generateImage() is the right place to pass model-specific parameters like init_image or mask_image once you extend the adapter to support them.
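
Inside doGenerate, provider-specific fields arrive namespaced under the provider name, i.e. options.providerOptions.modelslab. A sketch of merging them into the request payload — the field name init_image here is an illustrative example of what an extended adapter might forward, not something the adapter above supports yet:

```typescript
// Sketch: merge provider-specific options into the request payload.
// Fields under the 'modelslab' namespace override the defaults, so
// callers can pass endpoint-specific parameters without adapter changes.
function buildBody(
  base: Record<string, unknown>,
  providerOptions?: Record<string, Record<string, unknown>>
): Record<string, unknown> {
  return { ...base, ...(providerOptions?.modelslab ?? {}) }
}
```

A caller would then pass providerOptions: { modelslab: { init_image: '...' } } through generateImage(), and the merged body would carry it to whichever ModelsLab endpoint the extended adapter targets.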

Start with generateImage() and a single model. Get it generating. Then expand from there as your use case grows.

Get your free ModelsLab API key →
