Available now on ModelsLab · Language Model

Relace: Relace Search

Search Codebases Agentically

Search Faster Than RAG

Agentic Reasoning

Multi-Step Code Exploration

Runs 4-12 view_file and grep tool calls in parallel to find relevant files.

Ultra-Fast

4x Frontier Speed

Performs precise multi-step reasoning over codebases 4x faster than frontier models.

Subagent Ready

Hands Off to Oracle

Passes its findings to an oracle coding agent for task completion.
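The parallel exploration described above, many file views and greps in flight at once, can be sketched harness-side with a thread pool. This is an illustrative sketch, not Relace's implementation; only the tool names view_file and grep come from the page.

```python
import concurrent.futures
import pathlib
import re


def view_file(path: str, max_lines: int = 50) -> str:
    """Return the first max_lines lines of a file."""
    lines = pathlib.Path(path).read_text().splitlines()[:max_lines]
    return "\n".join(lines)


def grep(pattern: str, path: str) -> list[str]:
    """Return the lines in path that match a regex pattern."""
    rx = re.compile(pattern)
    return [ln for ln in pathlib.Path(path).read_text().splitlines() if rx.search(ln)]


def run_parallel(calls):
    """Execute (fn, args) tool calls concurrently, as an agent harness might."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=12) as pool:
        futures = [pool.submit(fn, *args) for fn, args in calls]
        return [f.result() for f in futures]
```

A harness would feed the model's requested tool calls into run_parallel and return all results in one batch.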

Examples

See what Relace: Relace Search can create

Copy any prompt below and try it yourself in the playground.

API Route Finder

Search the codebase for all API routes handling user authentication, including middleware and controllers. List relevant files with line numbers and brief summaries of their roles.

Database Schema

Locate database schema definitions and migrations related to user profiles. Return file paths, key models, and any associated queries.

Error Handler

Find error handling logic for payment processing failures across the repo. Include functions, try-catch blocks, and logging statements.

Config Loader

Identify configuration loading modules for environment variables and secrets management. Provide file locations and initialization code snippets.

For Developers

A few lines of code.
Search repos. Agentic calls.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per token, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={
        "key": "YOUR_API_KEY",
        "prompt": "",
        "model_id": "",
    },
)
print(response.json())

FAQ

Common questions about Relace: Relace Search

Read the docs

What is Relace: Relace Search?

Relace: Relace Search is an LLM that runs 4-12 view_file and grep tools in parallel to explore codebases, returning relevant files via agentic multi-step reasoning. It is designed as a subagent for coding tasks.

How is it different from RAG?

Relace: Relace Search uses agentic multi-step exploration, which plain RAG retrieval lacks, to produce precise results 4x faster than frontier models. It outperforms RAG on codebase retrieval.

How much does it cost?

Input costs $1.00 per 1M tokens; output costs $3.00 per 1M tokens. The context window is 256k tokens with a 128k-token max output.
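At those rates, per-request cost is simple arithmetic. A quick sketch (the token counts below are made up for illustration):

```python
INPUT_PER_M = 1.00   # $ per 1M input tokens
OUTPUT_PER_M = 3.00  # $ per 1M output tokens


def cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in dollars for one request."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M


# e.g. a 200k-token codebase context with a 5k-token answer:
print(round(cost(200_000, 5_000), 4))  # 0.215
```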

Can I use it as a subagent?

Yes. Build an agent harness that parses its responses and passes the findings to an oracle coding agent. The harness must provide tool integrations for view_file and grep.

How does it compare to alternatives?

Relace: Relace Search excels in speed and precision over standard RAG or frontier LLMs; no direct equivalent matches its agentic codebase search. Pair it with Relace Apply for full workflows.

Where is it available?

It is available via OpenRouter as relace/relace-search, using Relace provider endpoints. Sign up at relace.ai for playground access and API keys.
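Since OpenRouter exposes models through an OpenAI-compatible chat-completions endpoint, a call can be sketched with requests. The model id relace/relace-search comes from above; treat the exact payload shape as an assumption and check the Relace docs.

```python
import requests


def build_request(query: str) -> dict:
    """Chat-completion payload for OpenRouter's OpenAI-compatible endpoint."""
    return {
        "model": "relace/relace-search",
        "messages": [{"role": "user", "content": query}],
    }


# Uncomment to send (requires an OpenRouter API key):
# resp = requests.post(
#     "https://openrouter.ai/api/v1/chat/completions",
#     headers={"Authorization": "Bearer YOUR_OPENROUTER_KEY"},
#     json=build_request("Find all API routes handling user authentication."),
# )
# print(resp.json()["choices"][0]["message"]["content"])
```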

Ready to create?

Start generating with Relace: Relace Search on ModelsLab.