How to Use ModelsLab Models in LobeChat (2026 Setup Guide)

Adhik Joshi | 5 min read | API


What Is LobeChat?

LobeChat is an open-source, self-hostable AI chat interface with 72,000+ GitHub stars — one of the fastest-growing projects in the LLM ecosystem. It supports dozens of AI providers through a plugin architecture, and this week, Patcher shipped PR #12560 adding ModelsLab as a first-class supported provider.

What that means practically: anyone running LobeChat can now connect their ModelsLab API key and use ModelsLab's 200+ models (Flux, SDXL, Wan 2.2, Qwen3.5, Llama 4, and more) directly in the chat interface — no custom plugin configuration required.

This guide shows you how to set it up in under 10 minutes, whether you're self-hosting LobeChat or running it locally.

Why LobeChat + ModelsLab?

Most AI chat interfaces are locked to a small set of providers — OpenAI, Anthropic, Google. LobeChat's architecture is different: it treats providers as plugins, which means community contributors can add support for any API-compatible backend.

The ModelsLab integration adds:

  • Text generation: Qwen3.5 (3B–122B), Llama 4 Scout/Maverick, DeepSeek R1, Mistral Large — all through ModelsLab's OpenAI-compatible endpoint
  • Image generation: Flux.1, SDXL, Stable Diffusion 3.5, and 150+ image models directly in chat
  • Multimodal workflows: Ask for a concept, generate an image, describe the result — all in one conversation thread
  • Cost control: Pay per token/image, no monthly subscription for the AI provider side
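The cost-control point can be made concrete with a back-of-the-envelope estimator. Note the per-1K-token rates below are illustrative placeholders, not ModelsLab's actual pricing; check the dashboard for real numbers:

```python
# Rough per-request cost estimator for pay-per-token pricing.
# The rates here are PLACEHOLDERS, not ModelsLab's real prices.
ASSUMED_RATES_PER_1K_TOKENS = {
    "Qwen/Qwen3.5-72B-Instruct": 0.0005,  # placeholder rate, $/1K tokens
    "deepseek-ai/DeepSeek-R1": 0.0010,    # placeholder rate, $/1K tokens
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated dollar cost of one chat completion at the assumed rate."""
    rate = ASSUMED_RATES_PER_1K_TOKENS[model]
    return (prompt_tokens + completion_tokens) / 1000 * rate

print(f"${estimate_cost('Qwen/Qwen3.5-72B-Instruct', 800, 200):.4f}")
```

Because every request reports its token usage, this kind of accounting can run per conversation, which is exactly what subscription tiers hide.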

Setting Up LobeChat Locally (5 minutes)

If you don't already have LobeChat running, the quickest path is Docker:

docker run -d \
  -p 3210:3210 \
  -e MODELSLAB_API_KEY=your_key_here \
  --name lobechat \
  lobehub/lobe-chat:latest

Then open http://localhost:3210 in your browser.

Alternatively, use npx:

npx @lobehub/chat --port 3210
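Either way, a quick reachability check confirms the frontend is up before you continue. A minimal sketch, assuming the default port mapping shown above:

```python
import urllib.request

def lobechat_is_up(url: str = "http://localhost:3210", timeout: float = 5.0) -> bool:
    """Return True if the LobeChat frontend answers with a non-error status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        # Connection refused, timeout, or HTTP error: treat all as "not up".
        return False

print("LobeChat reachable:", lobechat_is_up())
```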

Connecting ModelsLab in LobeChat

Once LobeChat is running:

  1. Click the Settings gear icon (top right)
  2. Navigate to Language Model → ModelsLab
  3. Enter your ModelsLab API key
  4. Click Check to verify the connection
  5. Enable the models you want to use from the list

Get your API key at modelslab.com/dashboard. Free tier includes credits to test before committing to a paid plan.
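You can also sanity-check the key outside the UI. A sketch using only the standard library, assuming the /api/v1 base mirrors OpenAI's GET /models route in the same way it mirrors chat completions:

```python
import json
import urllib.request

def build_models_request(api_key: str,
                         base_url: str = "https://modelslab.com/api/v1") -> urllib.request.Request:
    """Authenticated GET /models request using OpenAI-style bearer auth."""
    return urllib.request.Request(
        f"{base_url}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

def list_models(api_key: str) -> list[str]:
    """Fetch the model IDs visible to this key."""
    with urllib.request.urlopen(build_models_request(api_key), timeout=10) as resp:
        return [m["id"] for m in json.load(resp)["data"]]
```

If the key is valid, `list_models` returns the same model IDs you can enable in the LobeChat settings panel.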

Using ModelsLab Models in Chat

After connecting, select a ModelsLab model from the model picker in any chat thread:

  • Qwen/Qwen3.5-72B-Instruct — Best general-purpose open-source LLM, matches GPT-4o on most tasks at a fraction of the cost
  • meta-llama/Llama-4-Scout-17B-16E-Instruct — Meta's latest, strong on instruction following and code
  • deepseek-ai/DeepSeek-R1 — Reasoning-focused, excellent for complex analysis and math
  • Qwen/Qwen3.5-122B-A10B-Instruct — Flagship 122B MoE model, frontier-quality reasoning

Self-Hosting LobeChat with ModelsLab: Docker Compose

For a production-ready self-hosted setup:

version: '3.8'
services:
  lobechat:
    image: lobehub/lobe-chat:latest
    ports:
      - "3210:3210"
    environment:
      # ModelsLab provider
      - MODELSLAB_API_KEY=${MODELSLAB_API_KEY}

      # Optional: pre-configure other providers
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}

      # Optional: set ModelsLab as default provider
      - DEFAULT_AGENT_CONFIG=provider=modelslab,model=Qwen/Qwen3.5-72B-Instruct

    restart: unless-stopped
    depends_on:
      - postgres

  # Optional: database for persistent chat history. Pointing LobeChat
  # at it requires its server-side database configuration; see the
  # LobeChat self-hosting docs.
  postgres:
    image: postgres:15
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=lobechat
    volumes:
      - pg_data:/var/lib/postgresql/data

volumes:
  pg_data:

This setup gives you a fully self-hosted AI chat interface backed by ModelsLab's model infrastructure — no data leaving your VPC except for the actual inference calls to the API.

Why This Matters for Developers

LobeChat's architecture makes it a practical choice for teams that want an internal AI assistant without building a chat UI from scratch. With the ModelsLab integration live, you get:

  • Model flexibility: Switch between Qwen3.5, Llama 4, DeepSeek R1, or any ModelsLab-supported model without changing your setup
  • Cost transparency: API-based pricing means you see exactly what each conversation costs — no opaque subscription tiers
  • Privacy: Self-host the frontend, route inference through ModelsLab's API — no conversation data in consumer AI products
  • Extensibility: LobeChat supports plugins, custom agents, and knowledge bases — ModelsLab provides the model backbone

The PR Behind This Integration

PR #12560, merged March 1, 2026, adds ModelsLab as a model provider to LobeChat following the same pattern established by AkashChat. The implementation touches 7 files: the ModelProvider enum, provider runtime configuration, type definitions, and unit tests.

For open-source contributors, this is a clean reference implementation for adding any OpenAI-compatible API as a LobeChat provider. The PR uses ModelsLab's /api/v1 endpoint, which mirrors the OpenAI chat completions format — meaning the same code path works for any OpenAI-compatible backend.
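That "same code path" claim is easy to see from the client side: between OpenAI-compatible backends, only the base URL and the API key change, while the request shape stays identical. A minimal sketch (the non-ModelsLab URL is the standard OpenAI endpoint, shown for comparison):

```python
# One request shape, many backends: only the base URL differs.
BACKENDS = {
    "modelslab": "https://modelslab.com/api/v1",
    "openai": "https://api.openai.com/v1",
}

def chat_request(backend: str, model: str, prompt: str) -> tuple[str, dict]:
    """Build the (url, payload) pair for an OpenAI-style chat completion.
    The payload is identical regardless of which backend serves it."""
    url = f"{BACKENDS[backend]}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload
```

This is why a provider integration like PR #12560 is mostly configuration: enum entry, endpoint URL, model list, and the shared OpenAI-compatible runtime does the rest.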

Accessing ModelsLab Models via API Directly

If you're building something custom on top of LobeChat, or want to call ModelsLab models directly without the chat interface:

from openai import OpenAI

# ModelsLab uses OpenAI-compatible API format
client = OpenAI(
    api_key="your_modelslab_api_key",
    base_url="https://modelslab.com/api/v1"
)

response = client.chat.completions.create(
    model="Qwen/Qwen3.5-72B-Instruct",
    messages=[
        {"role": "user", "content": "Explain the difference between async/await and callbacks in JavaScript"}
    ],
    max_tokens=1024
)

print(response.choices[0].message.content)

The same API key works for both LobeChat UI and direct API calls — no separate credentials to manage.
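For incremental output, the same client can stream. A sketch, assuming the endpoint implements OpenAI's streaming protocol (which its chat-completions compatibility suggests, though the source PR does not confirm it); `stream_completion` is an illustrative helper, not a ModelsLab API:

```python
def collect_stream(chunks) -> str:
    """Join the incremental content deltas of a streamed chat completion."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta content is typically None
            parts.append(delta)
    return "".join(parts)

def stream_completion(api_key: str, prompt: str) -> str:
    """Stream a completion from ModelsLab and return the assembled text."""
    from openai import OpenAI  # third-party: pip install openai

    client = OpenAI(api_key=api_key, base_url="https://modelslab.com/api/v1")
    stream = client.chat.completions.create(
        model="Qwen/Qwen3.5-72B-Instruct",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    return collect_stream(stream)
```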

Get Started

Three steps to run LobeChat with ModelsLab:

  1. Get a ModelsLab API key at modelslab.com/dashboard (free tier available)
  2. Deploy LobeChat: docker run -p 3210:3210 -e MODELSLAB_API_KEY=your_key lobehub/lobe-chat
  3. Select a ModelsLab model in Settings → Language Model → ModelsLab

LobeChat source: github.com/lobehub/lobe-chat (72k stars). PR #12560 is live as of March 1, 2026.

For the full ModelsLab model catalog — including image generation, video, and audio models — see modelslab.com/models.
