Why Add ModelsLab to Vercel AI SDK?
Vercel's AI SDK ships with image providers for DALL-E and a handful of Imagen/Stability models. That's useful if you want one or two models. It's not enough if you're building a product where model variety matters — character consistency, photorealism, anime style, product photography, inpainting.
ModelsLab gives you 50,000+ models under a single API key. FLUX.1 Schnell for speed. SDXL Turbo for style variety. Fluxgram for portrait consistency. And pricing that starts at $0.002 per image — versus $0.04+ from OpenAI's DALL-E 3.
The AI SDK's generateImage() function works with any provider that implements the ImageModelV1 interface. This post shows you how to build that adapter, drop it into a Next.js app, and start generating images through ModelsLab from your existing AI SDK code.
Prerequisites
- Next.js 14+ app with the App Router
- AI SDK installed: `npm install ai`
- A ModelsLab API key — get one free at modelslab.com
Step 1: Build the ModelsLab Provider Adapter
The AI SDK exposes an ImageModelV1 interface from @ai-sdk/provider. Any class that satisfies this interface can be passed to generateImage().
Create lib/modelslab.ts in your project:
```ts
// lib/modelslab.ts
import type { ImageModelV1, ImageModelV1CallOptions } from '@ai-sdk/provider'

const API_URL = 'https://modelslab.com/api/v6/realtime/text2img'

class ModelsLabImageModel implements ImageModelV1 {
  readonly specificationVersion = 'v1'
  readonly provider = 'modelslab'
  readonly maxImagesPerCall = 4 // realtime endpoint supports up to 4 samples

  constructor(
    readonly modelId: string,
    private readonly apiKey: string,
  ) {}

  async doGenerate(options: ImageModelV1CallOptions) {
    const { prompt, n, size, seed, abortSignal } = options
    const [width, height] = (size ?? '1024x1024').split('x')

    const res = await fetch(API_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      signal: abortSignal,
      body: JSON.stringify({
        key: this.apiKey,
        model_id: this.modelId,
        prompt,
        width,
        height,
        samples: n,
        seed,
      }),
    })

    if (!res.ok) {
      throw new Error(`ModelsLab request failed: ${res.status} ${res.statusText}`)
    }

    const data = await res.json()

    // ModelsLab signals API-level errors in the JSON body, not the HTTP status
    if (data.status === 'error') {
      throw new Error(`ModelsLab error: ${data.message ?? 'unknown error'}`)
    }

    // ModelsLab returns CDN URLs; fetch each one and hand the raw bytes
    // to the AI SDK, which exposes them as { base64, uint8Array }
    const images = await Promise.all(
      (data.output as string[]).map(async (url) => {
        const imageRes = await fetch(url)
        if (!imageRes.ok) {
          throw new Error(`Failed to fetch generated image: ${imageRes.status}`)
        }
        return new Uint8Array(await imageRes.arrayBuffer())
      }),
    )

    return {
      images,
      warnings: [],
      response: {
        timestamp: new Date(),
        modelId: this.modelId,
        headers: undefined,
      },
    }
  }
}

export function createModelsLab(apiKey: string) {
  return {
    image: (modelId: string) => new ModelsLabImageModel(modelId, apiKey),
  }
}
```
A few things worth noting about this adapter:
- `specificationVersion: 'v1'` — required by the AI SDK provider contract.
- `maxImagesPerCall: 4` — ModelsLab's realtime endpoint supports up to 4 samples per request. The AI SDK batches automatically when you request more with `n`.
- Image fetching — ModelsLab returns CDN URLs, not base64. The adapter fetches each URL and converts it so the AI SDK gets the consistent `{ base64, uint8Array }` format.
- Error handling — checks both the HTTP status and the `status: 'error'` JSON response that ModelsLab uses for API-level errors.
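The fetch-and-convert step described above can be factored into a small standalone helper. This is a sketch, not code from the adapter file itself; `toImagePayload` and `fetchImage` are illustrative names:

```typescript
// toImagePayload: wraps raw bytes in the { base64, uint8Array } pair
// that the AI SDK ultimately exposes on each generated image.
function toImagePayload(bytes: Uint8Array): {
  base64: string
  uint8Array: Uint8Array
} {
  return {
    base64: Buffer.from(bytes).toString('base64'),
    uint8Array: bytes,
  }
}

// fetchImage: downloads one ModelsLab CDN URL and converts it.
async function fetchImage(url: string) {
  const res = await fetch(url)
  if (!res.ok) throw new Error(`image fetch failed: ${res.status}`)
  return toImagePayload(new Uint8Array(await res.arrayBuffer()))
}
```

Keeping the conversion pure makes it easy to unit-test without hitting the network.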
Step 2: Basic Usage With generateImage()
Once you have the adapter, using it is identical to any other AI SDK image provider:
```ts
import { generateImage } from 'ai'
import { createModelsLab } from '@/lib/modelslab'
import fs from 'node:fs'

const modelslab = createModelsLab(process.env.MODELSLAB_API_KEY!)

async function main() {
  const { image } = await generateImage({
    model: modelslab.image('flux-1-schnell'),
    prompt: 'A cozy coffee shop interior, warm lighting, cinematic',
    size: '1024x1024',
  })

  fs.writeFileSync('coffee-shop.png', image.uint8Array)
}

main()
```
Generate multiple images in one call:
```ts
const { images } = await generateImage({
  model: modelslab.image('sdxl'),
  prompt: 'Product photography of a sleek wireless keyboard on a white background',
  size: '1024x1024',
  n: 4, // generates 4 variants
})

images.forEach((img, i) => {
  fs.writeFileSync(`product-${i}.png`, img.uint8Array)
})
```
Step 3: Next.js API Route for Client-Side Image Generation
The typical pattern is a server-side API route that accepts a prompt from your frontend and returns the generated image as base64 data. This keeps your API key off the client.
```ts
// app/api/generate-image/route.ts
import { generateImage } from 'ai'
import { createModelsLab } from '@/lib/modelslab'
import { NextRequest, NextResponse } from 'next/server'

const modelslab = createModelsLab(process.env.MODELSLAB_API_KEY!)

export async function POST(req: NextRequest) {
  try {
    const { prompt, modelId = 'flux-1-schnell' } = await req.json()

    const { image } = await generateImage({
      model: modelslab.image(modelId),
      prompt,
      size: '1024x1024',
    })

    return NextResponse.json({ image: image.base64 })
  } catch (error) {
    console.error('ModelsLab generateImage error:', error)
    return NextResponse.json({ error: 'image generation failed' }, { status: 500 })
  }
}
```
Client-side usage from a React component:
```tsx
'use client'
import { useState } from 'react'

export function ImageGenerator() {
  const [prompt, setPrompt] = useState('')
  const [src, setSrc] = useState<string | null>(null)
  const [loading, setLoading] = useState(false)

  async function generate() {
    setLoading(true)
    try {
      const res = await fetch('/api/generate-image', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ prompt }),
      })
      const data = await res.json()
      setSrc(`data:image/png;base64,${data.image}`)
    } finally {
      setLoading(false)
    }
  }

  return (
    <div>
      <input value={prompt} onChange={(e) => setPrompt(e.target.value)} />
      <button onClick={generate} disabled={loading}>
        {loading ? 'Generating...' : 'Generate'}
      </button>
      {src && <img src={src} alt={prompt} />}
    </div>
  )
}
```
Model Selection Guide
ModelsLab's catalog has 50,000+ models. Here are the ones worth reaching for first:
- flux-1-schnell — fastest option, 4-step generation, good for prototyping. Use when latency matters more than quality.
- flux-1-dev — higher quality FLUX.1 variant, 28-step generation. Use for production image generation where you need detail.
- fluxgram-v1-0 — character-consistent generation. Use when you're generating the same person across multiple images (product demos, character design).
- sdxl — SDXL base, reliable for general-purpose generation. Large community of fine-tuned variants if you need a specific style.
- realistic-vision-v6 — photorealistic output. Use for product photography, lifestyle images, anything that needs to look like a real photo.
You can browse the full catalog at modelslab.com/models and swap any model slug into the modelId parameter above.
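If you route use cases to models programmatically, a small lookup table keeps the slugs from this guide in one place. `pickModel` is a hypothetical helper, not part of the adapter or the ModelsLab SDK:

```typescript
type UseCase = 'prototype' | 'production' | 'character' | 'photo' | 'general'

// Maps the use cases from the guide above to model slugs.
const MODEL_BY_USE_CASE: Record<UseCase, string> = {
  prototype: 'flux-1-schnell',  // fastest, 4-step generation
  production: 'flux-1-dev',     // higher quality, 28-step generation
  character: 'fluxgram-v1-0',   // character-consistent output
  photo: 'realistic-vision-v6', // photorealistic output
  general: 'sdxl',              // reliable general-purpose base
}

function pickModel(useCase: UseCase): string {
  return MODEL_BY_USE_CASE[useCase]
}
```

The slug then drops straight into `modelslab.image(pickModel('prototype'))`.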
Environment Variables
Add to your .env.local:
MODELSLAB_API_KEY=your_api_key_here
Get your key from the ModelsLab dashboard. Free tier includes 100 test images.
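One pattern worth considering: validate the key at startup so a missing variable fails loudly instead of surfacing later as a confusing API error. `requireApiKey` is an illustrative helper, assumed here rather than taken from the ModelsLab docs:

```typescript
// Fail fast if the key is missing from the environment.
function requireApiKey(
  env: Record<string, string | undefined> = process.env,
): string {
  const key = env.MODELSLAB_API_KEY
  if (!key) {
    throw new Error('MODELSLAB_API_KEY is not set; add it to .env.local')
  }
  return key
}
```

Call it once where you construct the provider, e.g. `createModelsLab(requireApiKey())`, instead of sprinkling `process.env.MODELSLAB_API_KEY!` assertions around.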
Switching Providers Without Changing Code
One of the core benefits of the AI SDK's provider abstraction is that swapping providers is a one-line change. If you later want to compare ModelsLab output against DALL-E 3 or Stability AI:
```ts
// Compare providers without touching your application logic
const provider = useModelsLab
  ? modelslab.image('flux-1-dev')
  : openai.image('dall-e-3')

const { image } = await generateImage({
  model: provider,
  prompt,
  size: '1024x1024',
})
```
Same generateImage() call. Different providers. This is the abstraction that makes multi-provider architectures practical in production.
What's Next
This adapter covers the straightforward text-to-image path. ModelsLab also supports:
- Image-to-image via `/api/v6/image_to_image` — pass an init image and strength parameter
- Inpainting — mask a region and regenerate it
- ControlNet — pose-guided and depth-guided generation
- LoRA fine-tunes — load custom LoRA weights for brand-specific styles
Each of these maps to a different endpoint and can be wrapped the same way as the adapter above. The providerOptions parameter in generateImage() is the right place to pass model-specific parameters like init_image or mask_image once you extend the adapter to support them.
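As a sketch of that extension, a request-body builder can merge `providerOptions` into the ModelsLab payload before the adapter posts it. The `buildImg2ImgBody` helper and its merge behavior are assumptions about how you might structure this, not the adapter's current code; field names like `init_image` and `strength` are ModelsLab img2img parameters:

```typescript
// Hypothetical: merge AI SDK providerOptions into the JSON body
// for ModelsLab's /api/v6/image_to_image endpoint.
function buildImg2ImgBody(opts: {
  key: string
  prompt: string
  providerOptions?: { modelslab?: Record<string, unknown> }
}): Record<string, unknown> {
  return {
    key: opts.key,
    prompt: opts.prompt,
    samples: 1,
    // model-specific extras (init_image, strength, mask_image, ...)
    ...(opts.providerOptions?.modelslab ?? {}),
  }
}
```

Namespacing the extras under a `modelslab` key follows the AI SDK convention of scoping provider options by provider name.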
Start with generateImage() and a single model. Get it generating. Then expand from there as your use case grows.
