3 AI Portfolio Projects That Get Developers Hired in 2026

Adhik Joshi | 9 min read | API


Hiring managers at companies building AI products see hundreds of portfolios. Most have the same three things: a chatbot, a sentiment classifier, and a resume that says "familiar with LLMs."

The portfolios that stand out do something different. They use production-grade AI APIs to solve real problems. They show working code. They demonstrate that the candidate has shipped something that touches real infrastructure — not just wrapped ChatGPT in a Flask app.

This tutorial walks through three projects you can build this weekend using ModelsLab's API suite. Each project solves a genuine business problem, uses real generative AI (image, video, audio), and gives you something demonstrable when a hiring manager asks "show me something you built."

All code runs against the ModelsLab API. You'll need a free account and API key.

What makes an AI portfolio project stand out

Before the projects, a quick framework. Hiring managers in AI-adjacent roles (ML engineer, AI product engineer, full-stack with AI) are looking for three signals:

  • Problem clarity: Did you pick a real use case or a toy? "Image classifier for cats vs dogs" is a toy. "Product image generator for e-commerce sellers who can't afford a photographer" is a use case.
  • API depth: Did you just call openai.chat.completions.create() with a prompt? Or did you handle rate limits, retries, streaming, error states, and model fallbacks?
  • Demo-ability: Can you show it working in 60 seconds during a video call? If yes, you're ahead of 90% of portfolio projects.

Each project below is designed to score high on all three.
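The second signal, API depth, is the easiest one to demonstrate in a few lines of code. Here's a minimal retry-with-backoff sketch, written generically over a callable so it works without network access; the helper name and structure are illustrative, not a ModelsLab utility:

```python
import time

def with_retries(call, max_retries: int = 3, base_delay: float = 1.0):
    """Run call() with exponential backoff on exceptions.

    Retries transient failures up to max_retries times, doubling the
    delay each attempt (base_delay, then 2x, 4x, ...)."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: surface the original error
            time.sleep(base_delay * (2 ** attempt))
```

In practice you'd wrap an HTTP call, e.g. `with_retries(lambda: requests.post(url, json=payload, timeout=60))`. Taking the operation as a parameter also makes the retry logic unit-testable, which is exactly the kind of detail that signals API depth.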

Project 1: AI Product Image Generator for E-Commerce

The use case

Small e-commerce sellers — Etsy shops, Shopify stores, Amazon FBA sellers — need high-quality product photos. A professional product shoot costs anywhere from $50 to $500 per image. Many sellers photograph products on their kitchen counter with a phone.

Your project: a CLI tool that takes a product description and generates professional studio-style product images using the ModelsLab Image Generation API. The seller inputs what their product is; the tool outputs images they can use on their listing.

The code

import requests
import sys
from pathlib import Path
API_KEY = "your_modelslab_api_key"
API_URL = "https://modelslab.com/api/v6/images/text2img"
def generate_product_image(product_description: str, output_dir: str = "./output") -> list[str]:
    """
    Generate studio-quality product images from a description.
    Returns list of saved image paths.
    """
    prompt = (
        f"Professional product photography of {product_description}, "
        "studio lighting, white background, 4K, sharp focus, "
        "commercial photography style, isolated product"
    )
    payload = {
        "key": API_KEY,
        "prompt": prompt,
        "negative_prompt": "blurry, low quality, distorted, watermark, text, person, hands",
        "width": "1024",
        "height": "1024",
        "samples": "4",
        "num_inference_steps": "30",
        "guidance_scale": 7.5,
        "safety_checker": "yes",
        "enhance_prompt": "yes",
    }
    response = requests.post(API_URL, json=payload, timeout=120)
    response.raise_for_status()
    data = response.json()
    if data.get("status") == "error":
        raise RuntimeError(f"API error: {data.get('message', 'Unknown error')}")
    # Handle both immediate and queued responses
    if data.get("status") == "processing":
        fetch_url = data.get("fetch_result")
        if fetch_url:
            data = poll_for_result(fetch_url)
    output_path = Path(output_dir)
    output_path.mkdir(parents=True, exist_ok=True)
    saved_paths = []
    for i, img_url in enumerate(data.get("output", [])):
        img_response = requests.get(img_url, timeout=30)
        img_response.raise_for_status()
        path = output_path / f"product_{i+1}.png"
        path.write_bytes(img_response.content)
        saved_paths.append(str(path))
        print(f"Saved: {path}")
    return saved_paths
def poll_for_result(fetch_url: str, max_attempts: int = 20) -> dict:
    """Poll for async result with exponential backoff."""
    import time
    for attempt in range(max_attempts):
        response = requests.post(fetch_url, json={"key": API_KEY}, timeout=30)
        data = response.json()
        if data.get("status") == "success":
            return data
        if data.get("status") == "error":
            raise RuntimeError(f"Processing failed: {data.get('message')}")
        wait = min(2 ** attempt, 30)
        print(f"Processing... retry in {wait}s (attempt {attempt+1}/{max_attempts})")
        time.sleep(wait)
    raise TimeoutError("Result not ready after max attempts")
if __name__ == "__main__":
    product = " ".join(sys.argv[1:]) or "ceramic coffee mug with geometric pattern"
    print(f"Generating images for: {product}")
    paths = generate_product_image(product)
    print(f"\nGenerated {len(paths)} images in ./output/")

What to add to your README

Don't just say "generates product images." Frame the business problem:

Saves e-commerce sellers $50–$500 per SKU in professional photography costs. Generates 4 studio-quality variations from a text description in under 60 seconds. Built for sellers on Etsy, Shopify, and Amazon Marketplace who need product photos at scale.

Then add a before/after screenshot: your tool's input (a text description) and output (the image). This is the 60-second demo that works on a video call.

Extensions to make it stand out further

  • Add a simple Gradio or Streamlit UI so non-technical users can try it
  • Add batch mode: read a CSV of product descriptions, generate images for all of them
  • Add a background removal step (ModelsLab has a background removal endpoint)
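The batch-mode extension is worth sketching, since it's the one hiring managers ask about ("what if I have 500 SKUs?"). A minimal version, assuming a CSV with a `description` column; the function name and signature here are illustrative, and the generator is passed in as a parameter rather than hard-coded:

```python
import csv

def batch_generate(csv_path: str, generate_fn) -> dict[str, list[str]]:
    """Read product descriptions from a CSV with a `description` column
    and run generate_fn on each, returning {description: image_paths}."""
    results = {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            desc = (row.get("description") or "").strip()
            if desc:  # skip blank rows
                results[desc] = generate_fn(desc)
    return results
```

Wired up as `batch_generate("products.csv", generate_product_image)`. Taking the generator as a parameter keeps the batching logic testable without spending API credits.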

Project 2: Marketing Video Brief-to-Clip Tool

The use case

Marketing teams at startups produce a constant stream of short-form video content for LinkedIn, Instagram Reels, and YouTube Shorts. They have ideas — a product launch, a feature announcement, a founder story — but no video budget and no video editor.

Your project: a tool that takes a 2-3 sentence marketing brief and generates a short video clip. The marketer writes "Show our AI transcription tool making a podcast interview searchable" — the tool outputs a video.

The code

import requests
import time
from dataclasses import dataclass
API_KEY = "your_modelslab_api_key"
TEXT2VIDEO_URL = "https://modelslab.com/api/v6/video/text2video"
@dataclass
class VideoConfig:
    width: int = 512
    height: int = 512
    num_frames: int = 16
    fps: int = 8
    num_inference_steps: int = 25
    guidance_scale: float = 7.5
def brief_to_video(brief: str, config: VideoConfig | None = None) -> str:
    """
    Convert a marketing brief into a video clip.
    Returns the URL of the generated video.
    """
    if config is None:
        config = VideoConfig()
    # Enhance the brief into a video-optimized prompt
    prompt = (
        f"{brief}, "
        "cinematic quality, smooth motion, professional lighting, "
        "4K, high definition, seamless loop"
    )
    payload = {
        "key": API_KEY,
        "prompt": prompt,
        "negative_prompt": "shaky camera, blurry, low quality, flickering, watermark",
        "width": str(config.width),
        "height": str(config.height),
        "num_frames": str(config.num_frames),
        "fps": str(config.fps),
        "num_inference_steps": str(config.num_inference_steps),
        "guidance_scale": config.guidance_scale,
    }
    print("Submitting brief to video API...")
    response = requests.post(TEXT2VIDEO_URL, json=payload, timeout=60)
    response.raise_for_status()
    data = response.json()
    if data.get("status") == "error":
        raise RuntimeError(f"API error: {data.get('message')}")
    # Video generation is async — poll for result
    fetch_url = data.get("fetch_result") or (data.get("output") or [None])[0]
    if data.get("status") == "processing" and fetch_url:
        print("Processing... this takes 30-90 seconds for video")
        data = poll_video_result(fetch_url)
    outputs = data.get("output", [])
    if not outputs:
        raise RuntimeError("No video output returned")
    video_url = outputs[0]
    print(f"\nVideo ready: {video_url}")
    return video_url
def poll_video_result(fetch_url: str, timeout_seconds: int = 180) -> dict:
    """Poll for video generation result."""
    start = time.time()
    deadline = start + timeout_seconds
    attempt = 0
    while time.time() < deadline:
        response = requests.post(fetch_url, json={"key": API_KEY}, timeout=30)
        data = response.json()
        if data.get("status") == "success":
            return data
        if data.get("status") == "error":
            raise RuntimeError(f"Video generation failed: {data.get('message')}")
        attempt += 1
        wait = min(attempt * 5, 30)
        print(f"Still processing... ({int(time.time() - start)}s elapsed)")
        time.sleep(wait)
    raise TimeoutError(f"Video generation timed out after {timeout_seconds}s")
# Example usage
if __name__ == "__main__":
    brief = "A developer typing code, a terminal showing an AI transcription output, clean and professional"
    video_url = brief_to_video(brief)
    # Download the video
    import urllib.request
    urllib.request.urlretrieve(video_url, "marketing_clip.mp4")
    print("Saved: marketing_clip.mp4")

Why this lands in interviews

Video generation is still genuinely impressive to watch in real time. Most hiring managers have played with image generators; fewer have seen text-to-video in action. Run this during a video interview and you have the room's attention for the next 90 seconds while the video processes.

Frame it as: "I built this for marketing teams at companies under 20 people who have no video budget. Here's a 10-second clip I generated from a one-sentence brief."

Project 3: AI Voice Narration App for Content Creators

The use case

Content creators who publish long-form articles, newsletters, or documentation often want an audio version. Narrating their own content takes hours. Hiring a voice actor costs per-minute rates that don't scale. Podcast-style audio versions of written content increase engagement and reach a different audience segment.

Your project: an app that takes a text article and converts it to a narrated audio file — clean, natural speech that sounds like a podcast host reading it.

import requests
from pathlib import Path
API_KEY = "your_modelslab_api_key"
TTS_URL = "https://modelslab.com/api/v6/voice/text_to_audio"
def narrate_article(
    text: str,
    voice: str = "en-US-JennyNeural",
    output_file: str = "narration.wav"
) -> str:
    """
    Convert article text to narrated audio.
    Returns path to saved audio file.
    """
    # Split long text into chunks (API handles reasonable lengths)
    chunks = split_text_for_narration(text, max_chars=800)
    audio_segments = []
    for i, chunk in enumerate(chunks):
        print(f"Narrating chunk {i+1}/{len(chunks)}...")
        payload = {
            "key": API_KEY,
            "prompt": chunk,
            "voice": voice,
            "output_format": "wav",
        }
        response = requests.post(TTS_URL, json=payload, timeout=60)
        response.raise_for_status()
        data = response.json()
        if data.get("status") == "error":
            raise RuntimeError(f"TTS error on chunk {i+1}: {data.get('message')}")
        # API may return a single URL string or a list of URLs
        output = data.get("output")
        audio_url = output[0] if isinstance(output, list) and output else output
        if audio_url:
            audio_response = requests.get(audio_url, timeout=30)
            audio_response.raise_for_status()
            audio_segments.append(audio_response.content)
    # Stitch segments with the wave module: naive byte concatenation would
    # leave each chunk's WAV header embedded mid-stream in the audio
    import io
    import wave
    output_path = Path(output_file)
    with wave.open(str(output_path), "wb") as out_wav:
        for i, segment in enumerate(audio_segments):
            with wave.open(io.BytesIO(segment), "rb") as seg_wav:
                if i == 0:
                    out_wav.setparams(seg_wav.getparams())
                out_wav.writeframes(seg_wav.readframes(seg_wav.getnframes()))
    print(f"Narration saved: {output_path} ({output_path.stat().st_size // 1024} KB)")
    return str(output_path)
def split_text_for_narration(text: str, max_chars: int = 800) -> list[str]:
    """Split text at sentence boundaries to stay within API limits."""
    import re
    sentences = re.split(r'(?<=[.!?])\s+', text)
    chunks = []
    current_chunk = []
    current_len = 0
    for sentence in sentences:
        if current_len + len(sentence) > max_chars and current_chunk:
            chunks.append(" ".join(current_chunk))
            current_chunk = [sentence]
            current_len = len(sentence)
        else:
            current_chunk.append(sentence)
            current_len += len(sentence)
    if current_chunk:
        chunks.append(" ".join(current_chunk))
    return chunks
if __name__ == "__main__":
    # Read from a text file or use a sample
    sample_article = """
    The developer job market in 2026 is bifurcating. On one side, engineers who've
    integrated AI tools into their daily workflow are more productive than ever. On the
    other, engineers who haven't adapted are finding their skills increasingly mismatched
    with what companies need. This isn't a future prediction — it's showing up in
    hiring data right now.
    """
    narrate_article(sample_article.strip(), output_file="article_narration.wav")

Portfolio angle for this project

The narration project has two strong portfolio angles:

  1. Developer tool angle: "Built an automated narration pipeline for a newsletter that converts weekly posts to audio for Substack's audio feature." Concrete. Shows you understand content workflows.
  2. API integration depth angle: The chunking and stitching logic shows you understand that real APIs have limits and you know how to handle them gracefully. This is the kind of code that separates a hobbyist project from something you'd actually deploy.

How to present these projects

GitHub structure that works

Every repo should have:

  • README with a GIF or screenshot — put this at the top. Hiring managers spend 20 seconds on a GitHub repo. Make those 20 seconds count.
  • A live demo link — even a basic Streamlit app deployed on Streamlit Cloud (free tier) is enough. "Click here to try it" beats "clone the repo and run locally" every time.
  • The problem statement in the first paragraph — not "this project uses the ModelsLab API." Instead: "For e-commerce sellers who can't afford professional photography."

What to say in the interview

Walk through your project in exactly this order:

  1. "The problem I was solving..." (10 seconds)
  2. "Here's the demo" (run it live — 90 seconds)
  3. "Here's one interesting technical challenge I hit..." (60 seconds — pick the async polling, the chunking logic, or the retry handling)
  4. "Here's what I'd improve if I had more time..." (30 seconds — shows product thinking)

Three minutes. You've shown a real problem, working code, technical depth, and the ability to think about future improvements. That's the complete picture a hiring manager needs.

Getting started with the ModelsLab API

All three projects use the ModelsLab API, which provides image generation, video generation, audio/TTS, and more through a single platform. API keys are available at modelslab.com.

The platform supports both synchronous responses for fast models and asynchronous polling for heavier tasks like video generation — which is why the code above includes polling logic. That pattern (submit → poll → retrieve) is worth understanding independently; it's how most production AI infrastructure works.
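Stripped to its skeleton, that pattern looks like the sketch below. The helper name and callable-based structure are illustrative (the projects above inline this logic against specific endpoints):

```python
import time

def submit_and_poll(submit, fetch, interval: float = 2.0, timeout: float = 180.0) -> dict:
    """Generic submit -> poll -> retrieve loop.

    submit() returns the initial API response dict; while its status is
    "processing", fetch() is called on an interval until the job reports
    "success" or the deadline passes."""
    result = submit()
    deadline = time.time() + timeout
    while result.get("status") == "processing":
        if time.time() > deadline:
            raise TimeoutError("job still processing at deadline")
        time.sleep(interval)
        result = fetch()
    if result.get("status") != "success":
        raise RuntimeError(f"job failed: {result.get('message')}")
    return result
```

Being able to explain this loop from memory, including the timeout and the error branch, is a strong interview signal on its own.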

Full API reference and model documentation: docs.modelslab.com
