ModelsLab with Vercel AI SDK: No Package Required 2026

10 min read | API


Why This Approach Works — And Why You Don't Need a Custom Package

If you've used the Vercel AI SDK, you know it provides a unified interface for working with AI models from different providers. What you might not know is that ModelsLab's chat and text generation API is fully OpenAI-compatible. That means you can plug ModelsLab into the AI SDK using either the @ai-sdk/openai provider or the @ai-sdk/openai-compatible provider — no custom package, no wrapper library, no maintenance burden.

Just point the baseURL at https://modelslab.com/api/v1, pass your ModelsLab API key, and you instantly get access to 200+ language models including Qwen 3.5, Llama 4, DeepSeek R1, and Mistral Large through the same generateText(), streamText(), and useChat() functions you already use.

This guide walks you through the complete setup — from installation to a production-ready Next.js chatbot.

Prerequisites

Before you begin, make sure you have:

  • Node.js 18+ installed
  • A Next.js 14+ project (App Router recommended)
  • A ModelsLab API key (get one free from your dashboard)
  • Basic familiarity with the Vercel AI SDK

Step 1: Install the Required Packages

You need the AI SDK core package and one of two provider packages. We recommend @ai-sdk/openai-compatible for the cleanest setup with non-OpenAI providers:

```bash
npm install ai @ai-sdk/openai-compatible @ai-sdk/react
```

Alternatively, if you already use @ai-sdk/openai in your project, you can reuse it by overriding the baseURL — we'll show both approaches below.

Step 2: Configure the ModelsLab Provider

Option A: Using @ai-sdk/openai-compatible (Recommended)

This is the purpose-built package for connecting OpenAI-compatible APIs. Create a provider configuration file:

```typescript
// lib/modelslab.ts
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

export const modelslab = createOpenAICompatible({
  name: 'modelslab',
  baseURL: 'https://modelslab.com/api/v1',
  apiKey: process.env.MODELSLAB_API_KEY!,
});
```

Option B: Using @ai-sdk/openai

If @ai-sdk/openai is already installed in your project, you can override its base URL:

```typescript
// lib/modelslab.ts
import { createOpenAI } from '@ai-sdk/openai';

export const modelslab = createOpenAI({
  baseURL: 'https://modelslab.com/api/v1',
  apiKey: process.env.MODELSLAB_API_KEY!,
});
```

Both options produce a provider instance you can use interchangeably with generateText() and streamText().

Environment Variables

Add your API key to .env.local:

```
MODELSLAB_API_KEY=your_api_key_here
```

Get your key from the ModelsLab dashboard. Pay-as-you-go pricing — no subscription required.
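The provider only reads the key when a request is made, so a missing variable tends to surface as a confusing 401 at request time rather than at startup. A small guard (a hypothetical helper, not part of the AI SDK) fails fast instead:

```typescript
// Hypothetical startup guard: throw immediately if a required
// environment variable is missing, instead of failing later with
// an opaque authentication error.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const apiKey = requireEnv('MODELSLAB_API_KEY');
```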

Step 3: Generate Text (Non-Streaming)

The simplest use case. Call generateText() with any ModelsLab model:

```typescript
import { generateText } from 'ai';
import { modelslab } from '@/lib/modelslab';

async function main() {
  const { text } = await generateText({
    model: modelslab.chatModel('Qwen/Qwen3.5-32B-Instruct'),
    prompt: 'Explain the difference between SQL and NoSQL databases.',
  });

  console.log(text);
}

main();
```

You can also pass a system message for more control:

```typescript
const { text } = await generateText({
  model: modelslab.chatModel('meta-llama/Llama-4-Scout-17B-16E-Instruct'),
  system: 'You are a senior software architect. Give concise, actionable advice.',
  prompt: 'What are the top three things to consider when designing a microservices architecture?',
});
```

Step 4: Stream Text in Real Time

For interactive applications, streaming delivers tokens to the user as they're generated instead of waiting for the full response. Use streamText():

```typescript
import { streamText } from 'ai';
import { modelslab } from '@/lib/modelslab';

async function main() {
  const result = streamText({
    model: modelslab.chatModel('Qwen/Qwen3.5-32B-Instruct'),
    prompt: 'Write a short introduction to event-driven architecture.',
  });

  // Print tokens to the terminal as they arrive
  for await (const textPart of result.textStream) {
    process.stdout.write(textPart);
  }
}

main();
```

The streamText() function also supports callbacks for monitoring:

```typescript
const result = streamText({
  model: modelslab.chatModel('Qwen/Qwen3.5-32B-Instruct'),
  prompt: 'Summarize the key principles of clean architecture.',
  onChunk({ chunk }) {
    if (chunk.type === 'text') {
      // Track chunks for analytics, logging, etc.
    }
  },
  onFinish({ text, usage, finishReason }) {
    console.log('Tokens used:', usage);
    console.log('Finish reason:', finishReason);
  },
});
```

Step 5: Build a Next.js Chat Application

This is where it all comes together. We'll build a streaming chatbot with a Next.js App Router API route and a React frontend using the useChat hook.

API Route

Create the server-side route handler:

```typescript
// app/api/chat/route.ts
import { streamText, UIMessage, convertToModelMessages } from 'ai';
import { modelslab } from '@/lib/modelslab';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: modelslab.chatModel('Qwen/Qwen3.5-32B-Instruct'),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```

React Chat Component

Build the frontend with the useChat hook from @ai-sdk/react:

```tsx
// app/page.tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat();

  return (
    <div>
      {messages.map((message) => (
        <div key={message.id}>
          <strong>{message.role === 'user' ? 'You: ' : 'AI: '}</strong>
          {message.parts.map((part, index) =>
            part.type === 'text' ? <span key={index}>{part.text}</span> : null
          )}
        </div>
      ))}

      <form
        onSubmit={(event) => {
          event.preventDefault();
          if (!input.trim()) return;
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          value={input}
          onChange={(event) => setInput(event.target.value)}
          placeholder="Ask something..."
        />
      </form>
    </div>
  );
}
```

Run npm run dev and open http://localhost:3000. You have a fully streaming chatbot powered by ModelsLab — built with the standard Vercel AI SDK, no custom packages involved.

Available Models

ModelsLab's OpenAI-compatible endpoint gives you access to 200+ language models. Here are the standout options:

| Model | Model ID | Best For |
| --- | --- | --- |
| Qwen 3.5 72B | `Qwen/Qwen3.5-72B-Instruct` | General purpose, high quality |
| Qwen 3.5 32B | `Qwen/Qwen3.5-32B-Instruct` | Balanced quality and speed |
| Qwen 3.5 3B | `Qwen/Qwen3.5-3B-Instruct` | Fast, lightweight tasks |
| Llama 4 Scout | `meta-llama/Llama-4-Scout-17B-16E-Instruct` | Instruction following |
| Llama 4 Maverick | `meta-llama/Llama-4-Maverick-17B-128E-Instruct` | Complex reasoning |
| DeepSeek R1 | `deepseek-ai/DeepSeek-R1` | Reasoning-heavy tasks |
| Mistral Large | `mistralai/Mistral-Large-2411` | Multilingual, enterprise use |

Browse the full catalog at modelslab.com/models.
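If you switch models often, one small convenience pattern (not part of the SDK) is to collect the IDs from the table above as typed constants, so call sites don't repeat raw strings:

```typescript
// Model IDs from the table above, gathered in one place. `as const`
// makes each value a literal type, so typos fail at compile time.
export const MODELSLAB_MODELS = {
  qwen72b: 'Qwen/Qwen3.5-72B-Instruct',
  qwen32b: 'Qwen/Qwen3.5-32B-Instruct',
  qwen3b: 'Qwen/Qwen3.5-3B-Instruct',
  llamaScout: 'meta-llama/Llama-4-Scout-17B-16E-Instruct',
  llamaMaverick: 'meta-llama/Llama-4-Maverick-17B-128E-Instruct',
  deepseekR1: 'deepseek-ai/DeepSeek-R1',
  mistralLarge: 'mistralai/Mistral-Large-2411',
} as const;

// Usage: modelslab.chatModel(MODELSLAB_MODELS.qwen32b)
```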

Why Use @ai-sdk/openai-compatible Instead of Building a Custom Provider?

The Vercel AI SDK supports building fully custom providers from scratch. So why use the OpenAI-compatible approach instead?

Custom Provider Approach

Building a custom provider means implementing the LanguageModelV1 interface yourself — handling request serialization, response parsing, streaming protocols, error mapping, and token counting. For ModelsLab's chat endpoint, you'd be reimplementing what the OpenAI-compatible layer already does, because the API format is identical.

A custom provider makes sense when your API has a non-standard format — proprietary request/response shapes, custom authentication flows, or unique streaming protocols.

OpenAI-Compatible Approach (What We're Doing)

Since ModelsLab's /api/v1 endpoint mirrors the OpenAI chat completions format exactly — same request body, same response structure, same SSE streaming format — you skip all the protocol-level work. The @ai-sdk/openai-compatible package handles it.
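To make "same request body" concrete, here is a hand-built sketch of the request the SDK constructs for you, assuming the standard `/chat/completions` path and Bearer authentication that OpenAI compatibility implies:

```typescript
// Sketch of the raw HTTP request behind a chat completion. Everything
// here is the standard OpenAI chat-completions shape; only the host
// differs. This is illustrative, not how you'd normally call the API.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

export function buildChatRequest(
  model: string,
  messages: ChatMessage[],
  apiKey: string
) {
  return {
    url: 'https://modelslab.com/api/v1/chat/completions',
    init: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

// Sending it would be a plain fetch: const { url, init } = buildChatRequest(...); fetch(url, init)
```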

Advantages:

  • Zero maintenance — Vercel maintains the OpenAI-compatible layer. You get bug fixes and new features (structured outputs, tool calling) automatically.
  • 5 lines of config instead of 200+ lines of provider code.
  • Battle-tested streaming — The SDK's streaming implementation handles edge cases (backpressure, connection drops, partial chunks) that a hand-rolled provider would need to handle manually.
  • Full feature support — generateText(), streamText(), useChat(), onFinish callbacks, usage tracking — everything works out of the box.

When to Build a Custom Provider

You'd only need a custom provider for ModelsLab if you wanted to wrap a non-chat endpoint — like the image generation API (/api/v6/realtime/text2img) or the audio API. For text and chat? The compatible layer is the right call.

Multi-Provider Architecture

One of the AI SDK's strongest features is provider-agnostic code. You can swap between ModelsLab and other providers without changing your application logic:

```typescript
import { generateText } from 'ai';
import { modelslab } from '@/lib/modelslab';
import { openai } from '@ai-sdk/openai';
import { anthropic } from '@ai-sdk/anthropic';

const providers = {
  modelslab: modelslab.chatModel('Qwen/Qwen3.5-32B-Instruct'),
  openai: openai('gpt-4o'),
  anthropic: anthropic('claude-3-5-sonnet-latest'),
};

const { text } = await generateText({
  model: providers.modelslab, // swap this line to change providers
  prompt: 'Explain quantum computing in simple terms.',
});
```

This makes it straightforward to A/B test providers, build fallback chains, or migrate between models.
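A fallback chain can be sketched as a generic helper that tries each provider-backed call in order and returns the first success. The helper below is illustrative, not an SDK API:

```typescript
// Illustrative fallback helper: run each candidate in order, return the
// first successful result, and rethrow the last error if all fail.
export async function withFallback<T>(
  candidates: Array<() => Promise<T>>
): Promise<T> {
  let lastError: unknown;
  for (const candidate of candidates) {
    try {
      return await candidate();
    } catch (error) {
      lastError = error; // remember the failure and try the next provider
    }
  }
  throw lastError;
}
```

Each candidate would wrap a call such as `() => generateText({ model: providers.modelslab, prompt }).then((r) => r.text)`, with later entries pointing at backup providers.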

Advanced: Adding Custom Headers and Request Transforms

The createOpenAICompatible function accepts additional configuration for advanced use cases:

```typescript
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

const modelslab = createOpenAICompatible({
  name: 'modelslab',
  baseURL: 'https://modelslab.com/api/v1',
  apiKey: process.env.MODELSLAB_API_KEY!,
  headers: {
    'X-Custom-Header': 'your-value',
  },
});
```

FAQ

Can I use ModelsLab with the AI SDK without installing any extra provider package?

Yes. If you already have @ai-sdk/openai installed, you can use createOpenAI({ baseURL: 'https://modelslab.com/api/v1', apiKey: process.env.MODELSLAB_API_KEY }) to create a ModelsLab-pointed provider. No additional package needed beyond what the AI SDK already provides.

Does streaming work with ModelsLab through the AI SDK?

Yes. ModelsLab's API supports Server-Sent Events (SSE) streaming in the same format as OpenAI. Both streamText() and the useChat() hook work with real-time token streaming out of the box. No special configuration is required.
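For illustration, here is roughly what those SSE events look like on the wire and how they would be decoded by hand. The SDK does all of this for you; the parser below is just a sketch of the OpenAI-style format (`data: <json>` lines, terminated by `data: [DONE]`):

```typescript
// Sketch of an OpenAI-style SSE decoder: collect the text deltas from
// each `data:` line until the `[DONE]` sentinel. Real streams arrive in
// chunks; this assumes the full buffer for simplicity.
export function parseSseDeltas(raw: string): string[] {
  const deltas: string[] = [];
  for (const line of raw.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;
    const payload = trimmed.slice('data:'.length).trim();
    if (payload === '[DONE]') break; // end-of-stream sentinel
    const event = JSON.parse(payload);
    const delta = event.choices?.[0]?.delta?.content;
    if (typeof delta === 'string') deltas.push(delta);
  }
  return deltas;
}
```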

What models are available through ModelsLab's OpenAI-compatible endpoint?

ModelsLab offers 200+ language models including Qwen 3.5 (3B to 122B parameters), Llama 4 Scout and Maverick, DeepSeek R1, Mistral Large, and many more. You can browse all available models at modelslab.com/models.

How does ModelsLab's pricing compare to OpenAI?

ModelsLab uses pay-as-you-go pricing with no subscription required. Pricing varies by model, but open-source models like Qwen 3.5 and Llama 4 are significantly more cost-effective than proprietary alternatives. Check modelslab.com/pricing for current rates.

Can I use tool calling and structured outputs with ModelsLab through the AI SDK?

Tool calling and structured output support depend on the specific model you're using. Models that support the OpenAI function calling format will work with the AI SDK's tool calling features. Check the model documentation on ModelsLab for specific capability support.

Get Started

The entire integration takes under five minutes:

  1. Install ai and @ai-sdk/openai-compatible
  2. Create the provider with your ModelsLab API key
  3. Call generateText() or streamText() with any model

No custom packages to build. No wrappers to maintain. Just point, configure, and generate.

Get your free ModelsLab API key →
