LiquidAI: LFM2.5-1.2B-Thinking (free)
Reason On-Device Free
Reason Smarter Faster
Chain-of-Thought
Generates Thinking Traces
Produces step-by-step reasoning before answering, for math, logic, and multi-step tasks.
32K Context
Handles Long Inputs
Sustains 46 tok/s at full 32K context for extended documents and workflows.
Agentic Power
Tool Use Optimized
Optimized for planning tool calls, data extraction, and RAG workflows.
Examples
See what LiquidAI: LFM2.5-1.2B-Thinking (free) can create
Copy any prompt below and try it yourself in the playground.
Math Proof
“Solve this equation step-by-step: prove that for all positive integers n, the sum of the first n odd numbers equals n squared. Show full reasoning chain.”
Logic Puzzle
“Three houses in a row, labeled A, B, C. A has a red door, B blue, C green. Owners: Alice the baker, Bob the coder, Charlie the engineer. The baker hates green. The coder lives in the middle. Who lives where? Reason fully.”
Code Debug
“Debug this Python function that fails on large inputs: def factorial(n): if n == 0: return 1 else: return n * factorial(n-1). Identify recursion issue and fix with reasoning.”
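For reference, the recursion issue in the prompt above is real: CPython's default recursion limit (roughly 1000 frames) makes the recursive factorial raise RecursionError for large n. An iterative rewrite, one fix the model might propose, avoids it:

```python
def factorial(n: int) -> int:
    # Iterative version: constant stack depth, so it handles large n
    # without hitting CPython's recursion limit.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```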
Data Analysis
“Analyze sales data: Q1:100 units $10k, Q2:150 $15k, Q3:120 $12k. Predict Q4 trend, suggest actions. Use chain-of-thought for agentic planning.”
For Developers
A few lines of code.
Reasoning API. One Call.
ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.
- Serverless: scales to zero, scales to millions
- Pay per token, no minimums
- Python and JavaScript SDKs, plus REST API
import requests

response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={"key": "YOUR_API_KEY", "prompt": "", "model_id": ""},
)
print(response.json())
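In practice you may want a small wrapper around that call. A minimal sketch, assuming only the field names shown in the snippet above (the helper names `build_payload` and `chat` are hypothetical, and the response schema is not documented here):

```python
import requests

API_URL = "https://modelslab.com/api/v7/llm/chat/completions"

def build_payload(api_key: str, model_id: str, prompt: str) -> dict:
    # Mirrors the JSON body used in the snippet above.
    return {"key": api_key, "prompt": prompt, "model_id": model_id}

def chat(api_key: str, model_id: str, prompt: str) -> dict:
    # Hypothetical convenience wrapper: adds a timeout and raises on
    # HTTP errors instead of failing silently. Inspect the returned
    # dict yourself, since the response schema is an assumption here.
    resp = requests.post(
        API_URL,
        json=build_payload(api_key, model_id, prompt),
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()
```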
Ready to create?
Start generating with LiquidAI: LFM2.5-1.2B-Thinking (free) on ModelsLab.