Available now on ModelsLab · Language Model

DeepSeek: R1 Distill Llama 70B

Reason Deep. Distill Smart.

Distill R1 Power Efficiently.

Math Mastery

94.5% MATH-500 Score

Leads distilled models on MATH-500 and scores 86.7% on AIME 2024 for advanced math reasoning.

Code Precision

57.5 LiveCodeBench

Outperforms o1-mini on GPQA Diamond and LiveCodeBench for reliable code generation.

Context Scale

131k Token Window

Handles long sequences with RoPE and Flash Attention on Llama-3.3-70B base.

Examples

See what DeepSeek: R1 Distill Llama 70B can create

Copy any prompt below and try it yourself in the playground.

Math Proof

Prove the infinitude of primes using contradiction. Provide step-by-step reasoning and formal notation.
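The contradiction argument this prompt asks for can be sketched in a few lines (a standard outline, not the model's actual output):

```latex
Suppose there are only finitely many primes $p_1, p_2, \dots, p_n$.
Let $N = p_1 p_2 \cdots p_n + 1$. For each $i$ we have
$N \equiv 1 \pmod{p_i}$, so no $p_i$ divides $N$. But every integer
$N > 1$ has at least one prime divisor $q$, and by the above
$q \notin \{p_1, \dots, p_n\}$, a contradiction. Hence there are
infinitely many primes. $\blacksquare$
```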

Code Algorithm

Write Python code for Dijkstra's algorithm on a graph with 100 nodes. Include priority queue and edge weights.
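For reference, a minimal Dijkstra sketch along the lines this prompt requests, using Python's `heapq` as the priority queue (a small hypothetical graph stands in for the 100-node one; the same code works at any size):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source using a min-heap priority queue.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    """
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Example weighted graph (edge weights on directed edges).
graph = {
    "A": [("B", 4), ("C", 1)],
    "B": [("D", 1)],
    "C": [("B", 2), ("D", 5)],
    "D": [],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}
```

The stale-entry check avoids a decrease-key operation, which `heapq` does not provide.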

Logic Puzzle

Solve this riddle: Five houses in a row, each with a different color, owner nationality, drink, cigarette brand, and pet. Deduce the pairings from clues.

Physics Derivation

Derive the Schrödinger equation from classical wave mechanics. Explain each quantum assumption step-by-step.

For Developers

A few lines of code.
Reasoning API. One Call.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per token, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={
        "key": "YOUR_API_KEY",  # your ModelsLab API key
        "prompt": "",           # your prompt text
        "model_id": "",         # the model ID for DeepSeek: R1 Distill Llama 70B
    },
)
print(response.json())

FAQ

Common questions about DeepSeek: R1 Distill Llama 70B

Read the docs

Ready to create?

Start generating with DeepSeek: R1 Distill Llama 70B on ModelsLab.