
DeepSeek: R1 Distill Llama 70B

deepseek · deepseek-deepseek-r1-distill-llama-70b · Closed Source Model · $0.75 per million tokens

About DeepSeek: R1 Distill Llama 70B

DeepSeek R1 Distill Llama 70B is a distilled large language model based on [Llama-3.3-70B-Instruct](/meta-llama/llama-3.3-70b-instruct), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). The model combines advanced distillation techniques to achieve high performance across...

Technical Specifications

Model ID
deepseek-deepseek-r1-distill-llama-70b
Category
LLM Models
Task
Text Generation
Price
$0.750000 per million tokens
Added
February 20, 2026

Key Features

  • Chat completion and multi-turn conversation API
  • Streaming response with token-by-token output
  • Function calling and tool use support
  • System prompts and role-based messaging
  • JSON mode and structured output
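
Features such as streaming are typically toggled through request parameters. Below is a minimal sketch of how token-by-token output might be requested; the `stream` flag name and the line-delimited response format are assumptions, so check the API documentation for the exact parameter names:

```python
import requests

API_URL = "https://modelslab.com/api/v7/llm/chat/completions"
MODEL_ID = "deepseek-deepseek-r1-distill-llama-70b"

def build_payload(prompt, api_key, stream=False, max_tokens=1000):
    """Assemble a chat-completion request body (plain dict, ready for JSON)."""
    return {
        "model_id": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": stream,  # assumed flag name for token-by-token output
        "key": api_key,
    }

def stream_chat(prompt, api_key):
    """Request a streamed response and print chunks as they arrive."""
    with requests.post(API_URL, json=build_payload(prompt, api_key, stream=True),
                       stream=True, timeout=60) as response:
        response.raise_for_status()
        for line in response.iter_lines():
            if line:
                print(line.decode("utf-8"))
```

`build_payload` is kept separate from the network call so the same request shape can be reused for non-streaming requests.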

Quick Start

Integrate DeepSeek: R1 Distill Llama 70B into your application with a single API call. Get your API key from the pricing page to get started.

import requests
import json

url = "https://modelslab.com/api/v7/llm/chat/completions"
headers = {
    "Content-Type": "application/json"
}
data = {
    "model_id": "deepseek-deepseek-r1-distill-llama-70b",
    "messages": [
        {
            "role": "user",
            "content": "Hello!"
        }
    ],
    "max_tokens": 1000,
    "key": "YOUR_API_KEY"
}

try:
    response = requests.post(url, headers=headers, json=data)
    response.raise_for_status()  # Raises an HTTPError for bad responses (4XX or 5XX)
    result = response.json()
    print("API Response:")
    print(json.dumps(result, indent=2))
except requests.exceptions.HTTPError as http_err:
    print(f"HTTP error occurred: {http_err} - {response.text}")
except Exception as err:
    print(f"Other error occurred: {err}")

View the full API documentation for SDKs, code examples in Python, JavaScript, and more.
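The same endpoint supports multi-turn conversation by resending the growing message history on each call. Here is a minimal sketch; the OpenAI-style `choices[0].message.content` response path is an assumption, so adjust it to the schema in the API documentation:

```python
import requests

API_URL = "https://modelslab.com/api/v7/llm/chat/completions"
MODEL_ID = "deepseek-deepseek-r1-distill-llama-70b"

class ChatSession:
    """Keep the running message history for a multi-turn conversation."""

    def __init__(self, api_key, system_prompt=None):
        self.api_key = api_key
        self.messages = []
        if system_prompt:
            self.messages.append({"role": "system", "content": system_prompt})

    def ask(self, user_text, max_tokens=1000):
        """Send the full history plus the new user turn, record the reply."""
        self.messages.append({"role": "user", "content": user_text})
        data = {
            "model_id": MODEL_ID,
            "messages": self.messages,
            "max_tokens": max_tokens,
            "key": self.api_key,
        }
        response = requests.post(API_URL, json=data, timeout=60)
        response.raise_for_status()
        result = response.json()
        # Assumed OpenAI-style response shape; adjust to the actual schema.
        reply = result["choices"][0]["message"]["content"]
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```

Because the API is stateless, appending each assistant reply back onto `self.messages` is what gives the model memory of earlier turns.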

Pricing

DeepSeek: R1 Distill Llama 70B API costs $0.750000 per million tokens. Pay only for what you use with no minimum commitments. View pricing plans
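At that rate, estimating a bill is simple arithmetic (assuming tokens are counted across both prompt and completion):

```python
PRICE_PER_MILLION_TOKENS = 0.75  # USD, per the pricing above

def estimate_cost_usd(total_tokens: int) -> float:
    """Estimate the USD cost of processing the given number of tokens."""
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(estimate_cost_usd(2_500_000))  # a 2.5M-token workload -> 1.875
```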

Use Cases

  • AI chatbots and virtual assistants
  • Code generation and developer tools
  • Content writing and copywriting automation
  • Data analysis, summarization, and extraction
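
As one concrete example, the summarization use case maps directly onto the request shape from the Quick Start; the system prompt below is just an illustration, not a required format:

```python
import requests

API_URL = "https://modelslab.com/api/v7/llm/chat/completions"
MODEL_ID = "deepseek-deepseek-r1-distill-llama-70b"

def build_summary_request(text, api_key, max_tokens=300):
    """Build a request body that asks the model to summarize `text`."""
    return {
        "model_id": MODEL_ID,
        "messages": [
            {"role": "system",
             "content": "Summarize the user's text in three bullet points."},
            {"role": "user", "content": text},
        ],
        "max_tokens": max_tokens,
        "key": api_key,
    }

def summarize(text, api_key):
    """POST the summarization request and return the parsed JSON response."""
    response = requests.post(API_URL,
                             json=build_summary_request(text, api_key),
                             timeout=60)
    response.raise_for_status()
    return response.json()
```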

DeepSeek: R1 Distill Llama 70B FAQ

What is DeepSeek: R1 Distill Llama 70B?

DeepSeek R1 Distill Llama 70B is a distilled large language model based on [Llama-3.3-70B-Instruct](/meta-llama/llama-3.3-70b-instruct), using outputs from [DeepSeek R1](/deepseek/deepseek-r1). The model combines advanced distillation techniques to achieve high performance across...

How do I integrate DeepSeek: R1 Distill Llama 70B into my application?

You can integrate DeepSeek: R1 Distill Llama 70B into your application with a single API call. Sign up on ModelsLab to get your API key, then use the model ID "deepseek-deepseek-r1-distill-llama-70b" in your API requests. We provide SDKs for Python, JavaScript, and cURL examples in the API documentation.

How much does the DeepSeek: R1 Distill Llama 70B API cost?

DeepSeek: R1 Distill Llama 70B costs $0.750000 per million tokens. ModelsLab uses pay-per-use pricing with no minimum commitments. A free tier is available to get started.

What is the model ID for DeepSeek: R1 Distill Llama 70B?

The model ID for DeepSeek: R1 Distill Llama 70B is "deepseek-deepseek-r1-distill-llama-70b". Use this ID in your API requests to specify this model.

Is there a free tier for DeepSeek: R1 Distill Llama 70B?

Yes, ModelsLab offers a free tier that lets you try DeepSeek: R1 Distill Llama 70B and other AI models. Sign up to get free API credits and start building immediately.