Available now on ModelsLab · Language Model

Upstage: Solar Pro 3
Reason Deeper. Scale Effortlessly

Deploy Production Reasoning Now

MoE Architecture

102B Total Parameters, 12B Active

Tripled scale over Solar Pro 2 delivers 30% reasoning gains with no increase in TPS or cost.

Instruction Following

52% IFBench Improvement

Excels in multi-step tasks and user intent capture for Korean and English queries.

Enterprise Ready

Same Latency, Same Costs

Retains Solar Pro 2's efficiency for complex agent workflows and production stability.

Examples

See what Upstage: Solar Pro 3 can create

Copy any prompt below and try it yourself in the playground.

Logic Puzzle

Solve this step-by-step: Three houses in a row, labeled A, B, and C. A has a red door, B a blue one, C a green one. Owners: Smith likes cats, Jones dogs, Lee birds. Smith is not in the green house. Dogs are not next to birds. Who owns birds?

Math Proof

Prove by induction: Sum of first n even numbers equals n(n+1). Show base case, assume for k, prove for k+1. Explain each step clearly.
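For reference, the induction this prompt asks for is short; a worked sketch of the argument a good response should contain:

```latex
% Claim: \sum_{i=1}^{n} 2i = n(n+1).
% Base case (n = 1): 2 \cdot 1 = 2 = 1 \cdot 2.
% Inductive step: assume \sum_{i=1}^{k} 2i = k(k+1). Then
\sum_{i=1}^{k+1} 2i
  = \sum_{i=1}^{k} 2i + 2(k+1)
  = k(k+1) + 2(k+1)
  = (k+1)(k+2),
% which is the claim for n = k+1, completing the induction.
```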

Code Debug

Fix this Python function that sorts a list but discards the original order of duplicates: def sort_unique(lst): return sorted(set(lst)). Add tests for [1, 2, 2, 3] expecting [1, 2, 3]. Preserve order where possible.
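For reference, one order-preserving fix the prompt is looking for (a sketch; the model may propose a different approach):

```python
def sort_unique(lst):
    """Remove duplicates while preserving first-occurrence order.

    dict.fromkeys keeps insertion order (Python 3.7+), so duplicates
    are dropped without re-sorting the input.
    """
    return list(dict.fromkeys(lst))

# Tests
assert sort_unique([1, 2, 2, 3]) == [1, 2, 3]
assert sort_unique([3, 1, 2, 1]) == [3, 1, 2]  # order preserved, unlike sorted(set(...))
```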

Strategy Plan

Outline a 5-step plan to launch an AI agent for customer support. Include reasoning for each step, metrics for success, and edge cases like multilingual queries.

For Developers

A few lines of code.
Reasoning LLM. One Call.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per token, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={
        "key": "YOUR_API_KEY",  # your ModelsLab API key
        "prompt": "",           # your prompt text
        "model_id": ""          # the model ID from the model page
    },
)
print(response.json())
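The request above can be wrapped in a small helper with basic error handling; a minimal sketch, assuming the same v7 endpoint (the `build_payload` and `solar_chat` names are illustrative, not part of any SDK):

```python
import requests

API_URL = "https://modelslab.com/api/v7/llm/chat/completions"

def build_payload(api_key, prompt, model_id):
    """Assemble the JSON body the endpoint expects."""
    return {"key": api_key, "prompt": prompt, "model_id": model_id}

def solar_chat(api_key, prompt, model_id):
    """POST a prompt and return the parsed JSON response."""
    resp = requests.post(API_URL, json=build_payload(api_key, prompt, model_id), timeout=60)
    resp.raise_for_status()  # surface HTTP errors instead of failing silently
    return resp.json()
```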

FAQ

Common questions about Upstage: Solar Pro 3

Read the docs

What is Upstage: Solar Pro 3?

Upstage: Solar Pro 3 is a 102B-parameter MoE LLM with 12B active parameters. It improves reasoning by 30% and instruction following by 52% over Solar Pro 2, at the same throughput and cost.

How does the Solar Pro 3 API perform?

The Upstage: Solar Pro 3 API matches Solar Pro 2 latency while tripling total parameters. Use it for production agents that need complex reasoning, accessed via standard LLM endpoints.

Is it strong at reasoning benchmarks?

Yes. It scores 62.5 on Arena Hard v2 with strong multi-step logic, is optimized for real-world tasks like math and preference following, and supports reasoning tokens.

How does it compare to dense models?

It serves as an efficient alternative to dense LLMs of similar scale, retaining enterprise stability for AI agents with no cost increase despite the quality gains.

Which languages does it support?

It is built for Korean and English with deep context capture. It handles subtle instructions across domains and is enterprise-focused for reliable outputs.

What does it cost?

It preserves Solar Pro 2 costs and TPS and is designed for predictable production use. Check the endpoint page for exact rates.

Ready to create?

Start generating with Upstage: Solar Pro 3 on ModelsLab.