Available now on ModelsLab · Language Model

LiquidAI: LFM2.5-1.2B-Thinking (free)

Reason · On-Device · Free

Reason Smarter, Faster

Chain-of-Thought

Generates Thinking Traces

Produces step-by-step reasoning before answering, for math, logic, and multi-step tasks.

32K Context

Handles Long Inputs

Sustains 46 tok/s at full 32K context for extended documents and workflows.

Agentic Power

Tool Use Optimized

Excels at planning tool calls, data extraction, and RAG workflows.

Examples

See what LiquidAI: LFM2.5-1.2B-Thinking (free) can create

Copy any prompt below and try it yourself in the playground.

Math Proof

Solve this equation step-by-step: prove that for all positive integers n, the sum of the first n odd numbers equals n squared. Show full reasoning chain.

Logic Puzzle

Three houses sit in a row, labeled A, B, and C: A has a red door, B a blue one, C a green one. The owners are Alice the baker, Bob the coder, and Charlie the engineer. The baker hates green; the coder lives in the middle house. Who lives where? Reason fully.

Code Debug

Debug this Python function that fails on large inputs: def factorial(n): if n == 0: return 1 else: return n * factorial(n-1). Identify recursion issue and fix with reasoning.

Data Analysis

Analyze sales data: Q1:100 units $10k, Q2:150 $15k, Q3:120 $12k. Predict Q4 trend, suggest actions. Use chain-of-thought for agentic planning.
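Any of these prompts can also be sent programmatically. A minimal sketch that only builds the request body (no network call), reusing the endpoint and field names from the For Developers snippet; the empty `model_id` is a placeholder for you to fill in:

```python
# Payload construction only; field names ("key", "prompt", "model_id")
# mirror the For Developers snippet on this page.
ENDPOINT = "https://modelslab.com/api/v7/llm/chat/completions"

def build_request(prompt, api_key, model_id):
    """Return the JSON body for one reasoning request."""
    return {"key": api_key, "prompt": prompt, "model_id": model_id}

math_prompt = (
    "Solve this equation step-by-step: prove that for all positive integers n, "
    "the sum of the first n odd numbers equals n squared. Show full reasoning chain."
)
body = build_request(math_prompt, api_key="YOUR_API_KEY", model_id="")
print(sorted(body))  # → ['key', 'model_id', 'prompt']
```

POST this body to `ENDPOINT` (as in the snippet below) to get the model's thinking trace and answer back.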

For Developers

A few lines of code.
Reasoning API. One Call.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per token, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={
        "key": "YOUR_API_KEY",  # your ModelsLab API key
        "prompt": "",           # the prompt to send
        "model_id": "",         # the model's ID on ModelsLab
    },
)
print(response.json())
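For production use you may want basic retry handling around that call. A hedged sketch, assuming the same endpoint and payload as above; the exponential backoff values are illustrative, not a ModelsLab recommendation:

```python
import time

import requests

def backoff_delays(retries=3, base=1.0):
    """Exponential backoff schedule: base * 2**attempt seconds per retry."""
    return [base * (2 ** i) for i in range(retries)]

def post_with_retries(url, payload, retries=3, timeout=30):
    """POST `payload` as JSON, retrying transient failures with backoff."""
    for attempt, delay in enumerate(backoff_delays(retries)):
        try:
            response = requests.post(url, json=payload, timeout=timeout)
            response.raise_for_status()  # surface HTTP 4xx/5xx as exceptions
            return response.json()
        except requests.RequestException:
            if attempt == retries - 1:
                raise  # out of retries: propagate the last error
            time.sleep(delay)

# Delays grow 1s, 2s, 4s before giving up.
print(backoff_delays())  # → [1.0, 2.0, 4.0]
```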

FAQ

Common questions about LiquidAI: LFM2.5-1.2B-Thinking (free)

Read the docs

What is LFM2.5-1.2B-Thinking?

A 1.2B-parameter model optimized for reasoning tasks such as math and tool use. It generates thinking traces before answering and runs in under 1GB on-device.
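The sub-1GB figure is easy to sanity-check with back-of-the-envelope arithmetic: weight memory is roughly parameter count × bits per weight ÷ 8. A rough sketch (weights only, ignoring activations and runtime overhead):

```python
PARAMS = 1.2e9  # 1.2B parameters

def weights_gb(bits_per_weight):
    """Approximate weight footprint in GB (1 GB = 1e9 bytes), weights only."""
    return PARAMS * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: ~{weights_gb(bits):.2f} GB")
# 16-bit is ~2.40 GB and 8-bit ~1.20 GB, so only quantized builds
# (e.g. 4-bit GGUF, ~0.60 GB) fit comfortably under 1 GB.
```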

How does it perform on benchmarks?

It matches Qwen3-1.7B on benchmarks with 40% fewer parameters, scoring 88% on MATH-500 and 57% on BFCLv3 tool use, with efficient test-time compute.

What context length does it support?

Up to 32K tokens, sustaining roughly 46 tok/s throughput at maximum context. Ideal for long documents.

What is it best suited for?

Agentic tasks, math reasoning, programming, and data extraction. For general chat, use the Instruct variant instead.

Can it be fine-tuned?

Yes. It is TRL-compatible for SFT, DPO, and GRPO, and deploys via Hugging Face, GGUF, MLX, and ONNX.

Why run it on-device?

On-device inference means no cloud costs, strong multilingual support, and a privacy-focused fit for edge apps.

Ready to create?

Start generating with LiquidAI: LFM2.5-1.2B-Thinking (free) on ModelsLab.