How to Add ModelsLab Image Generation to gpt-researcher (PR #1647)

Adhik Joshi · 3 min read · API


Autonomous research agents just got a visual upgrade. As of today, gpt-researcher — the open-source autonomous research agent with 25,000+ GitHub stars — now supports ModelsLab as a native image generation provider through PR #1647.

This means your research reports can now include AI-generated images, diagrams, and visual content — automatically, without leaving your research pipeline.

What Is gpt-researcher?

gpt-researcher is an autonomous agent that plans research tasks, searches the web, aggregates findings, and produces comprehensive reports — all without manual intervention. Developers use it to build research tools, competitive intelligence systems, and automated content pipelines.

The library already supported Google Gemini for image generation. With this PR, ModelsLab joins as the second official image generation provider — and brings a lot more model variety to the table.

What ModelsLab Adds

ModelsLab's image generation API supports 200+ models including Stable Diffusion XL, FLUX.1, SDXL-Turbo, ControlNet variants, and fine-tuned community models. For research reports, this means you can generate:

  • Illustrative diagrams for technical concepts
  • Visualizations of abstract ideas described in the report
  • Featured images for report summaries
  • Custom imagery based on research conclusions

The implementation is backward-compatible. Existing gpt-researcher setups are unaffected — image generation is opt-in via environment variable.

How to Configure It

Setup is a single environment-variable change:

# In your .env file or environment
IMAGE_GENERATION_PROVIDER=modelslab
MODELSLAB_API_KEY=your_api_key_here

Get your API key at modelslab.com/api-key (free tier includes 100 requests/month).
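Before running the full pipeline, it can help to confirm that both variables are actually visible to your Python process. A minimal sanity check (the placeholder values below are illustrative, not real credentials):

```python
import os

# Set the variables the way gpt-researcher expects them.
# Values here are placeholders; use your real key in practice.
os.environ["IMAGE_GENERATION_PROVIDER"] = "modelslab"
os.environ["MODELSLAB_API_KEY"] = "your_api_key_here"

def provider_configured() -> bool:
    """Return True when both ModelsLab variables are present and non-empty."""
    required = ("IMAGE_GENERATION_PROVIDER", "MODELSLAB_API_KEY")
    return all(os.environ.get(var) for var in required)

print(provider_configured())  # True once both variables are set
```

If this prints False, check that your .env file is being loaded (for example via python-dotenv) before gpt-researcher starts.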

Full Setup: gpt-researcher + ModelsLab

Here's a minimal working example that generates a research report with images:

import asyncio
import os
from gpt_researcher import GPTResearcher

# Configure environment
os.environ["IMAGE_GENERATION_PROVIDER"] = "modelslab"
os.environ["MODELSLAB_API_KEY"] = "your-api-key"
os.environ["OPENAI_API_KEY"] = "your-openai-key"  # or any supported LLM

async def run_research():
    # Define your research query
    query = "Latest breakthroughs in transformer architecture efficiency 2026"
    
    researcher = GPTResearcher(
        query=query,
        report_type="research_report"
    )
    
    # Run the full research pipeline
    await researcher.conduct_research()
    
    # Generate report with images
    report = await researcher.write_report()
    print(report)

asyncio.run(run_research())

gpt-researcher will automatically call ModelsLab's image API to generate relevant visuals as it constructs the report.
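If you want to see what the underlying call looks like, here is a rough sketch of a direct request to ModelsLab's text-to-image API, independent of gpt-researcher. The endpoint path and payload fields are assumptions based on ModelsLab's public docs; verify them against the current API reference before relying on this.

```python
import json
import os
import urllib.request

# Assumed endpoint path; confirm against ModelsLab's current API reference.
API_URL = "https://modelslab.com/api/v6/realtime/text2img"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Assemble the POST request the text-to-image endpoint expects."""
    payload = {
        "key": api_key,
        "prompt": prompt,
        "width": "1024",
        "height": "768",
        "samples": "1",
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Only send the request when a real key is configured.
if os.environ.get("MODELSLAB_API_KEY"):
    req = build_request(
        "diagram of a transformer encoder block",
        os.environ["MODELSLAB_API_KEY"],
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))
```

gpt-researcher handles this plumbing for you; the sketch is just to show there is no magic involved.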

Choosing the Right Image Model

By default, ModelsLab uses its standard text-to-image endpoint. For research reports, you'll want clean, informational imagery. A few options worth configuring:

  • FLUX.1-schnell — fast generation, good for abstract concepts
  • SDXL-Turbo — good quality/speed tradeoff for high-volume reports
  • Stable Diffusion XL — highest quality for single featured images

Check the ModelsLab image generation docs for model-specific endpoints and parameters.
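If you route model selection through your own code, a small lookup table keeps the choice explicit. The model_id strings below are illustrative placeholders; look up the exact identifiers in ModelsLab's model catalog before using them.

```python
# Map report use cases to ModelsLab model ids.
# The id strings are placeholders -- confirm them in the model catalog.
MODEL_CHOICES = {
    "fast": "flux.1-schnell",           # quick abstract visuals
    "bulk": "sdxl-turbo",               # quality/speed tradeoff at volume
    "featured": "stable-diffusion-xl",  # single high-quality hero image
}

def pick_model(use_case: str) -> str:
    """Return a model id for the use case, defaulting to the fast option."""
    return MODEL_CHOICES.get(use_case, MODEL_CHOICES["fast"])

print(pick_model("featured"))  # stable-diffusion-xl
```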

Why This Matters for Developer Workflows

Most research pipelines treat text and images as separate concerns. You write the report, then manually source visuals elsewhere. gpt-researcher + ModelsLab collapses this into one pipeline — the agent researches, writes, and illustrates autonomously.

For teams building research automation tools, this removes a manual step that previously required either stock photos (generic) or separate image generation pipelines (complex).

What's Next

The PR is open and actively maintained. If you're building on top of gpt-researcher, this is the integration to watch. ModelsLab's API supports custom model selection, negative prompts, style controls, and batch generation — all of which could be surfaced in future gpt-researcher configurations.

To get started: grab a free API key, install gpt-researcher via pip, and set the two environment variables above. Your research reports will have visuals in the next run.
