Available now on ModelsLab · Language Model

StepFun: Step 3.5 Flash · Flash Reasoning · 196B MoE

Reason Deep. Run Fast.

MoE Efficiency

11B Active Params

Activates only 11B of its 196B parameters per token via sparse MoE routing, delivering 196B-scale reasoning at 11B-model speed (see the routing sketch below).

Blazing Speed

100-300 Tok/s

3-way Multi-Token Prediction (MTP-3) delivers 100-300 tok/s, peaking at 350 tok/s on coding workloads.

Long Context

256K Window

Hybrid Sliding Window Attention handles 256K context with low compute overhead.
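
The sparse-MoE routing behind those numbers is easy to sketch: a router scores every expert for each token, and only the top-k actually run. Below is a minimal, framework-free Python illustration with toy layer sizes. The moe_forward helper, dimensions, and expert functions are hypothetical, not the actual Step 3.5 Flash implementation; only the 288-expert / top-8 figures come from the spec sheet.

import numpy as np

def moe_forward(x, router_w, experts, k=8):
    """Sparse MoE: run a token through its top-k experts only.

    x        : (d,) token hidden state
    router_w : (n_experts, d) router weights
    experts  : list of callables, each mapping (d,) -> (d,)
    Per-token compute scales with k, not with the total expert count.
    """
    logits = router_w @ x                 # score every expert for this token
    top = np.argsort(logits)[-k:]         # indices of the k best-scoring experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                  # softmax over just the selected k
    # Weighted sum of the k expert outputs; all other experts stay idle.
    return sum(g * experts[i](x) for g, i in zip(gates, top))

# Toy scale mirroring the spec sheet: 288 experts per layer, top-8 routed.
d, n_experts = 64, 288
rng = np.random.default_rng(0)
router_w = rng.normal(size=(n_experts, d))
experts = [
    (lambda W: (lambda x: np.tanh(W @ x)))(rng.normal(size=(d, d)) / np.sqrt(d))
    for _ in range(n_experts)
]
y = moe_forward(rng.normal(size=d), router_w, experts)
print(y.shape)  # (64,): same output shape, ~8/288 of the expert compute spent

Production kernels fuse this routing with expert-parallel communication, but the saving is the same: per-token compute scales with k, not with the total expert count.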

Examples

See what StepFun: Step 3.5 Flash can create

Copy any prompt below and try it yourself in the playground.

Math Proof

Solve this AIME-level math problem step-by-step: prove that for every integer n > 1, the sum-of-divisors function satisfies σ(n) < n(1 + ln n). Use chain-of-thought reasoning and verify with code execution if needed.

Code Agent

Write a Python function to parse a large codebase, identify bugs in async handlers, and suggest fixes. Output the refactored code with explanations.

Logic Chain

Analyze this complex logic puzzle involving 10 agents with constraints. Deduce the solution through multi-step reasoning, listing assumptions and eliminations.

Data Summary

Summarize key insights from a 200K token dataset on AI benchmarks, highlighting trends in MoE vs dense models, with quantitative comparisons.

For Developers

Agentic inference in a few lines of code.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per token, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

# Call the ModelsLab LLM chat completions endpoint (v7).
response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={
        "key": "YOUR_API_KEY",  # your ModelsLab API key
        "prompt": "Explain sparse MoE routing in two sentences.",  # example prompt
        "model_id": "",  # the Step 3.5 Flash model ID from your dashboard
    },
)
print(response.json())
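
The response schema isn't shown here, so treat the following extraction as a sketch: it assumes the JSON body exposes the generated text under a top-level "message" or "output" key, or an OpenAI-style "choices" list. All three shapes are assumptions; check the API docs against a real response.

data = response.json()

# Probe a few common payload shapes; keep whichever matches the real response.
text = (
    data.get("message")
    or data.get("output")
    or (data.get("choices") or [{}])[0].get("message", {}).get("content")
)
if text is None:
    raise RuntimeError(f"Unexpected response shape: {data}")
print(text)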

FAQ

Common questions about StepFun: Step 3.5 Flash

Read the docs

What is StepFun: Step 3.5 Flash?

StepFun: Step 3.5 Flash is an open-source 196B MoE LLM that activates 11B parameters per token. It excels at agentic reasoning, coding, and math at 100-300 tok/s, and supports 256K context via hybrid attention.

How fast is it?

Typical throughput is 100-300 tok/s thanks to 3-way Multi-Token Prediction (MTP-3), peaking at 350 tok/s for coding on Hopper GPUs. That is fast enough for real-time multi-step reasoning.

What is the API best suited for?

The StepFun: Step 3.5 Flash API powers fast agentic workflows, deep math, and long-context tasks. It is also well suited to low-VRAM inference on unified-memory hardware, and rivals proprietary models on benchmarks.

What are the model specs?

196B total parameters with 11B active, a 45-layer transformer, 256K context, and a 128K vocabulary. Each layer uses 288 fine-grained experts with top-8 routing, quantized to FP8.
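
Those figures are easy to sanity-check. The quick arithmetic below is illustrative only; attributing the gap between the two fractions to always-on components (attention, embeddings, any shared experts) is our assumption, not a published breakdown.

# Back-of-the-envelope check on the published figures.
total_params = 196e9
active_params = 11e9
experts_per_layer = 288
routed_per_token = 8

print(f"params active per token:  {active_params / total_params:.1%}")         # ~5.6%
print(f"experts active per layer: {routed_per_token / experts_per_layer:.1%}")  # ~2.8%
# The param fraction exceeds the expert fraction because attention,
# embeddings, and any shared experts run for every token (our assumption).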

Does it rival GPT, Claude, and Gemini?

Yes. StepFun: Step 3.5 Flash matches GPT/Claude/Gemini-class models in math (AIME 99.8%) and agentic benchmarks (ARC-AGI 56.5%), and it is open-source with superior efficiency. Deploy it via the API for production.

Why is the MoE design more efficient?

The MoE design delivers 196B-scale intelligence at the speed and latency of an 11B model. It outperforms dense peers on agentic tasks while using less VRAM, and handles long contexts cost-efficiently.

Ready to create?

Start generating with StepFun: Step 3.5 Flash on ModelsLab.