Available now on ModelsLab · Language Model

OpenAI gpt-oss-20b MoE

Deploy Efficient Reasoning Models

MoE Architecture

21B Total 3.6B Active

Activates 3.6B parameters per token from 21B total for low-latency inference on a single GPU.
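The mechanism can be sketched with a toy top-k router (expert count and layer sizes below are illustrative, not the model's real dimensions): only the selected experts run for each token, so compute scales with k rather than the total parameter count.

```python
import numpy as np

def moe_forward(x, experts, router_w, k=4):
    """Toy Mixture-of-Experts layer: route a token to its top-k experts.

    Only the k selected experts execute, which is how a model can hold
    21B parameters while activating only a fraction of them per token.
    """
    logits = x @ router_w                     # router score per expert
    top_k = np.argsort(logits)[-k:]           # indices of the k best experts
    weights = np.exp(logits[top_k])
    weights /= weights.sum()                  # softmax over the chosen experts
    return sum(w * experts[i](x) for w, i in zip(weights, top_k))

rng = np.random.default_rng(0)
n_experts, d = 32, 16
# Each "expert" is just a random linear map in this sketch.
experts = [(lambda W: (lambda x: x @ W))(rng.standard_normal((d, d)))
           for _ in range(n_experts)]
router_w = rng.standard_normal((d, n_experts))
out = moe_forward(rng.standard_normal(d), experts, router_w, k=4)
print(out.shape)  # (16,)
```

With k=4 of 32 experts, only an eighth of the expert weights touch each token, which is the same idea behind the 3.6B-of-21B active ratio.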

Reasoning Levels

Low Medium High Effort

Set the reasoning effort in the system prompt to balance speed against performance on complex tasks.
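As a sketch, a request at high effort could look like the payload below. gpt-oss models read their effort level from the system prompt; the exact field names and the model id here are assumptions, so confirm them against the ModelsLab API docs.

```python
# Sketch only: field names and model_id are assumptions, not the
# documented ModelsLab schema -- check the API reference before use.
payload = {
    "key": "YOUR_API_KEY",
    "system_prompt": "Reasoning: high",  # low | medium | high
    "prompt": "Prove that sqrt(2) is irrational.",
    "model_id": "gpt-oss-20b",           # assumed id; confirm in the catalog
}
print(payload["system_prompt"])  # Reasoning: high
```

Lower effort trades some quality on hard problems for faster, cheaper responses, so "Reasoning: low" fits quick overviews and "Reasoning: high" fits proofs and multi-step code.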

Agentic Tools

Function Calling Support

Handles tool use, structured outputs, and chain-of-thought reasoning for STEM and coding tasks.
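To make this concrete, here is a minimal sketch of what function calling looks like from the client side. The tool schema follows the common OpenAI-style convention; whether ModelsLab exposes exactly this shape is an assumption, and `get_weather` is a made-up tool for illustration.

```python
import json

# A tool definition in the common OpenAI-style function-calling schema
# (assumed shape -- verify against the provider's docs).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch(tool_call, registry):
    """Run the function a model's tool call names, with its JSON arguments."""
    args = json.loads(tool_call["arguments"])
    return registry[tool_call["name"]](**args)

# Simulate the structured call a function-calling model would return.
registry = {"get_weather": lambda city: f"Sunny in {city}"}
fake_call = {"name": "get_weather", "arguments": '{"city": "Paris"}'}
print(dispatch(fake_call, registry))  # Sunny in Paris
```

The point of structured outputs is exactly this: the model returns machine-parseable arguments instead of free text, so dispatching a tool call reduces to a dictionary lookup.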

Examples

See what OpenAI: Gpt-oss-20b can create

Copy any prompt below and try it yourself in the playground.

Math Proof

Prove Fermat's Last Theorem step-by-step using high reasoning effort. Explain each mathematical concept clearly for an advanced audience.

Code Debugger

Debug this Python function for sorting algorithms: def quicksort(arr): ... Identify errors and provide fixed version with medium reasoning.

Physics Simulation

Simulate quantum entanglement experiment. Describe setup, equations, and outcomes using low reasoning effort for quick overview.

Algorithm Design

Design efficient graph traversal algorithm for social network analysis. Include pseudocode and time complexity analysis with high effort.

For Developers

A few lines of code.
Reasoning. One API call.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per token, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={
        "key": "YOUR_API_KEY",
        "prompt": "",
        "model_id": ""
    }
)
print(response.json())

FAQ

Common questions about OpenAI: Gpt-oss-20b

Read the docs

What is OpenAI gpt-oss-20b?

OpenAI gpt-oss-20b is a 21B-parameter MoE LLM with 3.6B active parameters. It supports reasoning and tool use, and runs on 16 GB hardware. Released under Apache 2.0.

How do I access it?

Access it via the LLM endpoint with text input up to 128K tokens. Set the reasoning effort to low, medium, or high in the system prompt. Outputs are structured text responses.

How does it compare to other models?

It matches o3-mini benchmarks in reasoning and coding. Unlike proprietary APIs, it is optimized for local deployment. It also supports fine-tuning.

What hardware does it need?

It runs on consumer GPUs with 16-32 GB VRAM, using BF16 weights and MXFP4 quantization for MoE efficiency. Ideal for edge devices.
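A back-of-the-envelope check of why 16 GB suffices (MXFP4 stores roughly 4.25 bits per weight; the 20B/1B split between quantized MoE weights and BF16 dense weights below is an illustrative assumption, not the published breakdown):

```python
# Rough weight-memory estimate. The parameter split is assumed for
# illustration: most of the 21B params are MoE expert weights in
# MXFP4 (~4.25 bits/weight), the rest stay in BF16 (16 bits/weight).
moe_params, dense_params = 20e9, 1e9
mxfp4_bytes = moe_params * 4.25 / 8
bf16_bytes = dense_params * 2
total_gb = (mxfp4_bytes + bf16_bytes) / 1e9
print(f"~{total_gb:.1f} GB of weights")  # ~12.6 GB of weights
```

Even with activations and KV cache on top, that leaves headroom on a 16 GB card, which is the point of the MXFP4 packing.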

What is it good at?

It is strong in STEM, coding, math, and agentic workflows, and handles chain-of-thought over 20K tokens. Text-only, no multimodal input.

Can I fine-tune it?

Yes, it supports fine-tuning for domain tasks. It uses the Harmony format with function calling. Deploy locally or via API.

Ready to create?

Start generating with OpenAI: Gpt-oss-20b on ModelsLab.