Chrome 147 Built-in AI API Breaking Change: What Developers Should Do Instead

Adhik Joshi
7 min read | API


Chrome 147 shipped a breaking change that caught a lot of developers off guard: provideContext() and clearContext() were removed from Chrome's Built-in AI APIs. If your web app was using these methods to manage AI session context in the browser, it stopped working.

This isn't the first time Google has changed the Built-in AI surface, and it won't be the last. The Chrome AI APIs are still in origin trial and developer preview — they're moving fast, and the API surface is unstable by design. That's fine for experimentation. It's a problem if you've built production features on top of it.

This post explains what changed, who's affected, and why most production AI use cases are better served by server-side APIs like ModelsLab.

What Changed in Chrome 147

Chrome's Built-in AI suite includes several experimental APIs: Prompt API, Summarization API, Translation API, and the Language Detection API — all running against Gemini Nano, which ships locally in Chrome. The APIs are gated behind origin trials and require users to have a compatible device.

In Chrome 147, Google removed two session context management methods from the Prompt API:

  • session.provideContext(text) — used to prime a session with background context before queries
  • session.clearContext() — used to reset the session context without destroying the session

If your code called either of these, it now throws a TypeError. The methods no longer exist on the session object.

The change affects the WebMCP API (Web Model Context Protocol), which some developers had started using to build browser-native AI features with persistent context. The intended workaround is to recreate sessions (ai.languageModel.create()) whenever you need fresh context.
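If you have to support users on both sides of the change, you can probe for the removed method instead of hard-coding a Chrome version. The sketch below is a defensive pattern, not official guidance: supportsProvideContext and primeSession are illustrative helper names, and the fallback uses the create({ systemPrompt }) shape described later in this post.

```javascript
// Probe the session object rather than assuming a Chrome version.
function supportsProvideContext(session) {
  return typeof session?.provideContext === 'function';
}

async function primeSession(session, context) {
  if (supportsProvideContext(session)) {
    // Chrome 146 and earlier: prime the existing session in place
    await session.provideContext(context);
    return session;
  }
  // Chrome 147+: the method is gone, so fold the context into a
  // fresh session instead (destroy the old one if it supports it)
  session.destroy?.();
  return await ai.languageModel.create({ systemPrompt: context });
}
```

Feature detection like this keeps the TypeError from reaching users, but it doesn't remove the underlying problem: you're still tracking an unstable surface.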

Why This Keeps Happening

Chrome's Built-in AI APIs are genuinely experimental. Google is iterating fast, and the API surface changes between Chrome versions. Here's the track record:

  • Chrome 128: Prompt API introduced in origin trial
  • Chrome 131: Session cloning API changed
  • Chrome 138: Top-K sampling parameters renamed
  • Chrome 143: SystemPrompt API merged into ai.languageModel.create()
  • Chrome 147: provideContext() and clearContext() removed

This isn't unusual for origin trial APIs. The problem is that developers who built real features on top of these APIs — not just prototypes — are now playing a version-chasing game. Your app works in Chrome 146, breaks in Chrome 147, and you find out when users start reporting errors.

There's also the device availability issue. Chrome's Built-in AI requires Gemini Nano to be downloaded and loaded on the device. Not all users have compatible hardware. Not all users have Chrome. You're building for a subset of a subset.

The Practical Limits of In-Browser AI

Beyond the API instability, browser-based AI has real capability limits:

Model size

Gemini Nano is a small model optimized for on-device inference. It can handle basic text tasks — summarization, classification, simple Q&A — but it can't match the quality of GPT-4o, Claude 3.5, or Gemini Pro for complex reasoning, long documents, or nuanced generation tasks.

No image generation

Chrome's Built-in AI is text-only. If your app needs to generate images, render video, clone voices, or run a multimodal pipeline, you need a server-side API. Period.

No persistent state

With provideContext() gone, managing conversation history across sessions is even more manual. You need to pass the full message history in every call. Server-side APIs with proper session management handle this cleanly.
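In practice that means keeping the transcript yourself and resending it on every call. A minimal sketch of that bookkeeping (createChat and its method names are illustrative, not part of any Chrome API):

```javascript
// Manual conversation state: the app, not the session, owns history.
function createChat(systemPrompt) {
  const history = [{ role: 'system', content: systemPrompt }];
  return {
    add(role, content) {
      history.push({ role, content });
    },
    // The full transcript must be serialized into every prompt call
    transcript() {
      return history.map((m) => `${m.role}: ${m.content}`).join('\n');
    },
  };
}

const chat = createChat('You are a customer service bot.');
chat.add('user', 'Where is my invoice?');
// Each call would send chat.transcript(), not just the latest turn
```

As conversations grow, resending the whole transcript also burns through the model's context window faster.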

Rate limits and quotas

Gemini Nano has inference-time limits per session. Heavy use — long contexts, rapid queries — hits walls faster than you'd expect.

When In-Browser AI Actually Makes Sense

This isn't a case against Chrome's Built-in AI entirely. There are use cases where it works well:

  • Offline-capable apps — if you need AI that works without internet, Gemini Nano on-device is the only option in a browser
  • Privacy-sensitive text processing — grammar check, local summarization where you don't want data leaving the device
  • Latency-critical micro-tasks — single-shot classification or rewrite where the model size doesn't matter

For everything else — production AI features, image/video/audio generation, RAG pipelines, agent workflows — you want a server-side API you control.

How Server-Side APIs Handle This Differently

Server-side AI APIs don't break when Chrome ships a new version. They don't require the user to have a specific browser, device, or local model downloaded. And they give you access to the full model lineup — not just what fits in Gemini Nano.

ModelsLab's API, for example, exposes 200+ models across image generation, video generation, text generation, speech, and more. The API surface is versioned and stable — code you write today works tomorrow:

// Server-side: works in every browser, every device
const response = await fetch('https://modelslab.com/api/v6/realtime/text2img', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    key: process.env.MODELSLAB_API_KEY,
    prompt: 'A developer looking frustrated at their browser console',
    width: '512',
    height: '512',
    samples: '1',
    num_inference_steps: '20',
    safety_checker: 'no',
    enhance_prompt: 'yes',
  }),
});
const data = await response.json();
console.log(data.output[0]); // Image URL

Compare that to the equivalent Chrome Built-in AI code, which requires feature detection, capability checks, model download verification, and now different context management depending on which Chrome version the user is running.
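To make the comparison concrete, here is a hedged sketch of that browser-side boilerplate. The ai.languageModel names follow the origin-trial Prompt API shape discussed in this post, and the capabilities() call and its 'no' availability value are assumptions that may differ between Chrome versions:

```javascript
// Sync check: is the Built-in AI surface present at all?
function hasBuiltInAI() {
  return typeof globalThis.ai !== 'undefined' && !!globalThis.ai.languageModel;
}

async function generateInBrowser(prompt) {
  if (!hasBuiltInAI()) {
    throw new Error('Built-in AI not available in this browser');
  }
  // Device check: Gemini Nano may be absent or not yet downloaded
  const caps = await globalThis.ai.languageModel.capabilities();
  if (caps.available === 'no') {
    throw new Error('Gemini Nano not available on this device');
  }
  const session = await globalThis.ai.languageModel.create();
  try {
    return await session.prompt(prompt);
  } finally {
    session.destroy?.();
  }
}
```

Every one of those branches is a failure mode the server-side version simply doesn't have.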

Migrating Away from provideContext()

If you were using provideContext() specifically, here's how to adapt your code for Chrome 147+ while you evaluate the longer-term architecture:

Option 1: Pass context in the systemPrompt at session creation

// Before (Chrome 146 and earlier)
const session = await ai.languageModel.create();
await session.provideContext('You are a customer service bot for a SaaS product...');
const response = await session.prompt(userMessage);

// After (Chrome 147+)
const session = await ai.languageModel.create({
  systemPrompt: 'You are a customer service bot for a SaaS product...',
});
const response = await session.prompt(userMessage);

Option 2: Recreate the session when context changes

// Destroy and recreate when you need to change context
async function createSessionWithContext(context) {
  return await ai.languageModel.create({
    systemPrompt: context,
  });
}

// In your app
let session = await createSessionWithContext(initialContext);
// ... later, when context changes
session.destroy();
session = await createSessionWithContext(newContext);

This works, but session creation isn't free: on devices that haven't yet cached Gemini Nano, recreating a session can trigger a model download. It's a regression from provideContext()'s lighter footprint.
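One way to soften that overhead is to cache the live session per context string, so repeated calls with the same context reuse one session and only a genuine context change pays the recreation cost. This is a sketch under the post-147 create({ systemPrompt }) assumption; makeSessionCache is an illustrative helper, not an API:

```javascript
// Cache one live session keyed by its context string.
function makeSessionCache(createFn) {
  let current = null;
  return async function getSession(context) {
    if (current && current.context === context) return current.session;
    // Context changed: drop the old session, create a replacement
    current?.session.destroy?.();
    current = { context, session: await createFn(context) };
    return current.session;
  };
}

// Wire it to the Chrome API (createFn is injected so it's swappable)
const getSession = makeSessionCache((context) =>
  ai.languageModel.create({ systemPrompt: context })
);
```

Injecting createFn also makes the cache trivial to unit-test with a stub.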

The Architecture Decision

Chrome's Built-in AI is interesting for specific use cases, but it's not a production AI layer for most apps. The API instability, device requirements, and capability limits make it a prototype surface, not a foundation.

If you're building AI features that need to work reliably for all users, the architecture is straightforward:

  1. Your frontend sends requests to your backend
  2. Your backend calls a stable, versioned AI API (ModelsLab, OpenAI, Anthropic — pick your models)
  3. Your frontend renders the response

This pattern works across every browser, every device, every Chrome version. When Chrome 148 changes something, your users don't notice.

Getting Started with ModelsLab

ModelsLab provides API access to 200+ AI models across text, image, video, audio, and multimodal tasks. The API is REST-based, versioned, and doesn't require any browser-specific setup:

  • Text generation: GPT-4o, Claude 3.5, Llama 3.3, Mistral, Gemini Pro
  • Image generation: FLUX.1, Stable Diffusion 3.5, DALL-E 3, Fluxgram
  • Video generation: Kling 3.0, Seedance 2.0, Veo 3.1
  • Speech: Voice cloning, TTS, real-time speech APIs

Get your API key and start building at modelslab.com. The API works from any backend language — Node.js, Python, Go, Ruby, PHP. No browser dependencies, no version-chasing.

Summary

Chrome 147 removed provideContext() and clearContext() from the Built-in AI APIs. The immediate fix is to move your context into systemPrompt at session creation. The longer-term answer: for any AI feature that needs to be reliable in production, use a server-side API.

Browser AI is an interesting experiment. Server-side APIs are how you ship.
