ModelsLab with Vercel AI SDK: No Package Required


Why Add ModelsLab to Vercel AI SDK?

Vercel's AI SDK ships with image providers for DALL-E and a handful of Imagen/Stability models. That's useful if you want one or two models. It's not enough if you're building a product where model variety matters — character consistency, photorealism, anime style, product photography, inpainting.

ModelsLab gives you 50,000+ models under a single API key. FLUX.1 Schnell for speed. SDXL Turbo for style variety. Fluxgram for portrait consistency. And pricing starting at $0.0047 per image (RealTime Text To Image) — versus $0.04+ from OpenAI's DALL-E 3.

The AI SDK's generateImage() function works with any provider that implements the ImageModelV1 interface. This post shows you how to build that adapter, drop it into a Next.js app, and start generating images through ModelsLab from your existing AI SDK code.

Prerequisites

  • A Next.js project with the AI SDK installed (the ai and @ai-sdk/provider packages)
  • A ModelsLab API key, exported as MODELSLAB_API_KEY
  • Node.js 18+ (the adapter uses the built-in fetch)

Step 1: Build the ModelsLab Provider Adapter

The AI SDK exposes an ImageModelV1 interface from @ai-sdk/provider. Any class that satisfies this interface can be passed to generateImage().

Create lib/modelslab.ts in your project:

```ts
// lib/modelslab.ts
import type { ImageModelV1, ImageModelV1CallOptions } from '@ai-sdk/provider'

/**
 * ModelsLab realtime image generation model.
 *
 * Wraps the /api/v6/realtime/text2img endpoint.
 */
export class ModelsLabImageModel implements ImageModelV1 {
  readonly specificationVersion = 'v1' as const
  readonly modelId: string

  constructor(
    modelId: string,
    private config: { apiKey: string; baseUrl?: string }
  ) {
    this.modelId = modelId
  }

  get provider(): string {
    return 'modelslab'
  }

  get maxImagesPerCall(): number {
    return 4
  }

  async doGenerate(options: ImageModelV1CallOptions) {
    const { prompt, n = 1, size, seed } = options

    // Parse "1024x1024" format or default to 512x512
    const [width, height] = size ? size.split('x').map(Number) : [512, 512]

    const baseUrl = this.config.baseUrl ?? 'https://modelslab.com/api/v6'

    const response = await fetch(`${baseUrl}/realtime/text2img`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        key: this.config.apiKey,
        model_id: this.modelId,
        prompt,
        width: String(width),
        height: String(height),
        samples: n,
        num_inference_steps: 20,
        safety_checker: false,
        enhance_prompt: 'yes',
        seed: seed ?? null,
      }),
    })

    if (!response.ok) {
      throw new Error(
        `ModelsLab API error ${response.status}: ${await response.text()}`
      )
    }

    const data = await response.json()

    if (data.status === 'error') {
      throw new Error(`ModelsLab: ${data.message}`)
    }

    // Output URLs need to be fetched and converted to base64
    const images = await Promise.all(
      (data.output as string[]).map(async (url) => {
        const imgResponse = await fetch(url)
        const buffer = await imgResponse.arrayBuffer()
        const uint8Array = new Uint8Array(buffer)
        const base64 = Buffer.from(buffer).toString('base64')
        return { base64, uint8Array }
      })
    )

    return { images }
  }
}

/**
 * Factory — mirrors the pattern of other AI SDK providers.
 *
 *   const modelslab = createModelsLab(process.env.MODELSLAB_API_KEY!)
 *   modelslab.image('flux-1-schnell')
 */
export function createModelsLab(apiKey: string, baseUrl?: string) {
  return {
    image(modelId: string) {
      return new ModelsLabImageModel(modelId, { apiKey, baseUrl })
    },
  }
}
```

A few things worth noting about this adapter:

  • specificationVersion: 'v1' — required by the AI SDK provider contract.
  • maxImagesPerCall: 4 — ModelsLab's realtime endpoint supports up to 4 samples per request. The AI SDK batches automatically when you request more with n.
  • Image fetching — ModelsLab returns CDN URLs, not base64. The adapter fetches each URL and converts it so the AI SDK gets the consistent { base64, uint8Array } format.
  • Error handling — checks both HTTP status and the status: 'error' JSON response that ModelsLab uses for API-level errors.
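The size parsing and batching behavior above can be sanity-checked in isolation. A minimal sketch (these helper names are ours, not part of the adapter or the AI SDK):

```typescript
// Parse an AI SDK size string like "1024x768"; fall back to 512x512,
// mirroring the default in the adapter's doGenerate().
function parseSize(size?: string): [number, number] {
  if (!size) return [512, 512]
  const [width, height] = size.split('x').map(Number)
  return [width, height]
}

// How many API calls the SDK ends up making when a request for n images
// exceeds maxImagesPerCall (4 for the realtime endpoint).
function callsNeeded(n: number, maxPerCall: number): number {
  return Math.ceil(n / maxPerCall)
}

console.log(parseSize('1024x768')) // [1024, 768]
console.log(parseSize())           // [512, 512]
console.log(callsNeeded(10, 4))    // 3 calls: 4 + 4 + 2
```

So a request for 10 images against this adapter becomes three ModelsLab calls, transparently merged back into one `images` array by the SDK.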

Step 2: Basic Usage With generateImage()

Once you have the adapter, using it is identical to any other AI SDK image provider:

```ts
import { generateImage } from 'ai'
import { createModelsLab } from '@/lib/modelslab'
import fs from 'node:fs'

const modelslab = createModelsLab(process.env.MODELSLAB_API_KEY!)

async function main() {
  const { image } = await generateImage({
    model: modelslab.image('flux-1-schnell'),
    prompt: 'A developer at a desk surrounded by multiple monitors, dark theme, cinematic lighting',
    size: '1024x1024',
  })

  // Save to disk
  fs.writeFileSync('output.png', image.uint8Array)
  console.log('Image saved — base64 length:', image.base64.length)
}

main()
```

Generate multiple images in one call:

```ts
const { images } = await generateImage({
  model: modelslab.image('sdxl'),
  prompt: 'Product photography of a sleek wireless keyboard on a white background',
  size: '1024x1024',
  n: 4, // generates 4 variants
})

images.forEach((img, i) => {
  fs.writeFileSync(`product-${i}.png`, img.uint8Array)
})
```

Step 3: Next.js API Route for Client-Side Image Generation

The typical pattern is a server-side API route that accepts a prompt from your frontend and returns the generated image as base64 data. This keeps your API key off the client.

```ts
// app/api/generate-image/route.ts
import { generateImage } from 'ai'
import { createModelsLab } from '@/lib/modelslab'
import { NextRequest, NextResponse } from 'next/server'

const modelslab = createModelsLab(process.env.MODELSLAB_API_KEY!)

// Model registry — keeps client requests from specifying arbitrary model IDs
const ALLOWED_MODELS: Record<string, string> = {
  fast: 'flux-1-schnell',
  quality: 'flux-1-dev',
  portrait: 'fluxgram-v1-0',
  sdxl: 'sdxl',
}

export async function POST(req: NextRequest) {
  const { prompt, model = 'fast', size = '1024x1024' } = await req.json()

  if (!prompt || typeof prompt !== 'string') {
    return NextResponse.json({ error: 'prompt required' }, { status: 400 })
  }

  const modelId = ALLOWED_MODELS[model]
  if (!modelId) {
    return NextResponse.json({ error: 'invalid model' }, { status: 400 })
  }

  try {
    const { image } = await generateImage({
      model: modelslab.image(modelId),
      prompt,
      size: size as `${number}x${number}`,
    })

    return NextResponse.json({
      base64: image.base64,
      mediaType: 'image/png',
    })
  } catch (error) {
    console.error('ModelsLab generateImage error:', error)
    return NextResponse.json(
      { error: 'image generation failed' },
      { status: 500 }
    )
  }
}
```
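The route above casts the client-supplied `size` straight to the template-literal type, which satisfies the compiler but validates nothing at runtime. A small guard can reject malformed values before they reach `generateImage` — this helper and its 2048px cap are our own illustrative additions, not part of the route or of ModelsLab's API:

```typescript
// Validate that a size string matches the "WxH" shape the route expects.
function isValidSize(size: string): size is `${number}x${number}` {
  const match = /^(\d+)x(\d+)$/.exec(size)
  if (!match) return false
  const [w, h] = [Number(match[1]), Number(match[2])]
  // Cap dimensions defensively (assumed limit, adjust to your models).
  return w > 0 && h > 0 && w <= 2048 && h <= 2048
}

console.log(isValidSize('1024x1024')) // true
console.log(isValidSize('banana'))    // false
console.log(isValidSize('0x512'))     // false
```

Calling this before the `generateImage` block lets the route return a 400 for bad sizes instead of surfacing a 500 from the upstream API.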

Client-side usage from a React component:

```tsx
'use client'
import { useState } from 'react'

export function ImageGenerator() {
  const [prompt, setPrompt] = useState('')
  const [src, setSrc] = useState<string | null>(null)
  const [loading, setLoading] = useState(false)

  async function generate() {
    setLoading(true)
    const res = await fetch('/api/generate-image', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt, model: 'fast' }),
    })
    const data = await res.json()
    setSrc(`data:image/png;base64,${data.base64}`)
    setLoading(false)
  }

  return (
    <div>
      <input
        value={prompt}
        onChange={(e) => setPrompt(e.target.value)}
        placeholder="Describe an image..."
      />
      <button onClick={generate} disabled={loading}>
        {loading ? 'Generating…' : 'Generate'}
      </button>
      {src && <img src={src} alt={prompt} />}
    </div>
  )
}
```