---
title: Trinity Mini — Efficient Reasoning LLM | ModelsLab
description: Run Arcee AI: Trinity Mini for 128K context reasoning and tool calling. Try the Arcee AI: Trinity Mini API for agent workflows now.
url: https://modelslab.com/arcee-ai-trinity-mini
canonical: https://modelslab.com/arcee-ai-trinity-mini
type: website
component: Seo/ModelPage
generated_at: 2026-04-15T02:22:42.930507Z
---

Available now on ModelsLab · Language Model

Arcee AI: Trinity Mini
Efficient MoE Reasoning
---

[Try Arcee AI: Trinity Mini](/models/open_router/arcee-ai-trinity-mini) [API Documentation](https://docs.modelslab.com)

Run Agents Seamlessly
---

Sparse MoE

### 3B Active Params

A 26B-parameter model that routes each token to a subset of its 128 experts, activating only about 3B parameters per token for low-latency inference.

Long Context

### 131K Token Window

Handles extended inputs with strong context utilization across the window, enabling grounded multi-turn responses.

Tool Calling

### Reliable Function Use

Produces schema-compliant JSON function calls and recovers gracefully inside agent loops via the Arcee AI: Trinity Mini API.
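
A tool-calling request can be sketched as a payload like the one below. This is a minimal sketch assuming the endpoint accepts an OpenAI-style `tools` array alongside `messages`; the field names and the `get_weather` function are illustrative assumptions, not confirmed parameters, so check the API documentation for the exact schema.

```python
import json

# Hypothetical tool definition -- the exact schema accepted by the
# ModelsLab endpoint is an assumption; verify it against the API docs.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative function name
        "description": "Fetch current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

payload = {
    "key": "YOUR_API_KEY",
    "model_id": "arcee-ai-trinity-mini",  # slug taken from this page's playground link
    "messages": [{"role": "user", "content": "What's the weather in NYC?"}],
    "tools": [weather_tool],
}

# Round-trip through JSON to confirm the payload serializes cleanly.
print(json.dumps(payload, indent=2).splitlines()[0])
```

The model's reply would then contain a structured call to the declared function, which your agent executes before sending the result back for the next turn.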

Examples

See what Arcee AI: Trinity Mini can create
---

Copy any prompt below and try it yourself in the [playground](/models/open_router/arcee-ai-trinity-mini).

Code Review

“Review this Python function for bugs and suggest optimizations: def fibonacci(n): if n <= 1: return n else: return fibonacci(n-1) + fibonacci(n-2)”

JSON Schema

“Generate valid JSON for user profile schema with name, email, age over 18, and preferences array.”

Agent Workflow

“Plan multi-step task: fetch weather API for NYC, compare to Tokyo, output summary in table format.”

Document Summary

“Summarize key points from this 10K token RAG document on quantum computing advancements.”

For Developers

Reasoning in a few lines of code.
---

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

- **Serverless:** scales to zero, scales to millions
- **Pay per token,** no minimums
- **Python and JavaScript SDKs,** plus REST API

[API Documentation ](https://docs.modelslab.com)

Python


```python
import requests

response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={
        "key": "YOUR_API_KEY",
        "prompt": "Explain sparse mixture-of-experts models in two sentences.",
        "model_id": "arcee-ai-trinity-mini",  # slug from this page's playground link
    },
)
print(response.json())
```

FAQ

Common questions about Arcee AI: Trinity Mini
---

[Read the docs ](https://docs.modelslab.com)

### What is Arcee AI: Trinity Mini?

### How does Arcee AI: Trinity Mini work?

### What is Arcee AI: Trinity Mini API context length?

### Is Arcee AI: Trinity Mini model open-weight?

### What is Arcee AI: Trinity Mini an alternative to?

### Where can I use the Arcee AI: Trinity Mini API?

Ready to create?
---

Start generating with Arcee AI: Trinity Mini on ModelsLab.

[Try Arcee AI: Trinity Mini](/models/open_router/arcee-ai-trinity-mini) [API Documentation](https://docs.modelslab.com)

---

*This markdown version is optimized for AI agents and LLMs.*

**Links:**
- [Website](https://modelslab.com)
- [API Documentation](https://docs.modelslab.com)
- [Blog](https://modelslab.com/blog)

---
*Generated by ModelsLab - 2026-04-15*