---
title: Llama 4 Scout Instruct (17Bx16E) | Text Generation | ModelsLab
description: A 17 billion parameter multimodal LLM with 16 experts using MoE architecture, featuring a massive 10 million token context window for advanced text and image reasoning.
url: https://modelslab.com/models/together_ai/meta-llama-llama-4-scout-17b-16e-instruct.md
canonical: https://modelslab.com/models/together_ai/meta-llama-llama-4-scout-17b-16e-instruct.md
type: product
component: Playground/LLM/Index
generated_at: 2026-05-05T21:49:48.039634Z
---

Llama 4 Scout Instruct (17Bx16E)
---


### Llama 4 Scout Instruct (17Bx16E)

Model ID: `meta-llama-Llama-4-Scout-17B-16E-Instruct` (provider: meta)

**Pricing**

- Input: $0.18 / 1M tokens
- Output: $0.59 / 1M tokens

**API Endpoints**

OpenAI Compatible:

- `https://modelslab.com/api/v7/llm/chat/completions` (Chat Completions)

Anthropic Compatible:

- `https://modelslab.com/api/v7/llm/v1/messages` (Messages)
- `https://modelslab.com/api/v7/llm/v1/messages/count_tokens` (Count Tokens)
- `https://modelslab.com/api/v7/llm/v1/models` (Models)
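
As a quick illustration of the Anthropic-compatible endpoints, the sketch below posts a minimal request to the Messages route. The body shape (`model`, `max_tokens`, `messages`) follows the standard Anthropic Messages format, and the in-body `key` field mirrors the Quick Start example further down; both are assumptions here, so check the API documentation for the exact authentication and field names.

```
import requests

# Hedged sketch: Anthropic-compatible Messages request.
# The body layout and the in-body "key" field are assumptions modeled on
# the Quick Start example below; verify against the ModelsLab API docs.
url = "https://modelslab.com/api/v7/llm/v1/messages"

payload = {
    "model": "meta-llama-Llama-4-Scout-17B-16E-Instruct",
    "max_tokens": 1000,
    "messages": [
        {"role": "user", "content": "Summarize the benefits of MoE architectures."}
    ],
    "key": "YOUR_API_KEY",  # assumption: same in-body key as the Quick Start example
}

response = requests.post(url, json=payload, timeout=60)
response.raise_for_status()
print(response.json())
```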


**Default Parameters**

- System Message: "You are a helpful AI assistant specialized in providing accurate and detailed responses."
- Temperature: 0.7
- Max Tokens: 1000
- Top P: 0.9
- Frequency Penalty: 0
- Presence Penalty: 0
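
To show how the playground defaults above map onto an API request, here is a minimal sketch against the OpenAI-compatible chat completions endpoint. The `model_id` and `key` fields follow the Quick Start example below, while the sampling parameter names (`temperature`, `top_p`, `frequency_penalty`, `presence_penalty`) assume OpenAI-style naming and should be confirmed in the API documentation.

```
import requests

# Hedged sketch: passing the default generation parameters listed above.
# "model_id" and "key" mirror the Quick Start example; the sampling
# parameter names assume OpenAI-style conventions.
url = "https://modelslab.com/api/v7/llm/chat/completions"

data = {
    "model_id": "meta-llama-Llama-4-Scout-17B-16E-Instruct",
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful AI assistant specialized in providing accurate and detailed responses.",
        },
        {"role": "user", "content": "Compare REST vs GraphQL APIs."},
    ],
    "temperature": 0.7,
    "max_tokens": 1000,
    "top_p": 0.9,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "key": "YOUR_API_KEY",
}

response = requests.post(url, json=data, timeout=60)
response.raise_for_status()
print(response.json())
```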


Related Models
---

Discover similar models you might be interested in

 [View all LLM Models](https://modelslab.com/models?feature=llmaster)

- [Mistral: Mistral Small 3.2 24B](https://modelslab.com/models/open_router/mistralai-mistral-small-3.2-24b-instruct) · [Open Router](https://modelslab.com/models/open_router) · Closed Source Model
- [Reka Edge](https://modelslab.com/models/open_router/rekaai-reka-edge) · [Open Router](https://modelslab.com/models/open_router) · Closed Source Model
- [OpenAI: GPT-5.1-Codex-Max](https://modelslab.com/models/open_router/openai-gpt-5.1-codex-max) · [Open Router](https://modelslab.com/models/open_router) · Closed Source Model
- [inclusionAI: Ling-2.6-flash](https://modelslab.com/models/open_router/inclusionai-ling-2.6-flash) · [Open Router](https://modelslab.com/models/open_router) · Closed Source Model
- [Qwen3 4B Base](https://modelslab.com/models/together_ai/Qwen-Qwen3-4B-Base) · [Together AI](https://modelslab.com/models/together_ai) · Closed Source Model
- [GLM 5.1 FP4](https://modelslab.com/models/together_ai/zai-org-GLM-5.1) · [Together AI](https://modelslab.com/models/together_ai) · Closed Source Model
- [Mistral: Mistral Nemo](https://modelslab.com/models/open_router/mistralai-mistral-nemo) · [Open Router](https://modelslab.com/models/open_router) · Closed Source Model
- [Amazon: Nova Pro 1.0](https://modelslab.com/models/open_router/amazon-nova-pro-v1) · [Open Router](https://modelslab.com/models/open_router) · Closed Source Model
- [Mistral: Codestral 2508](https://modelslab.com/models/open_router/mistralai-codestral-2508) · [Open Router](https://modelslab.com/models/open_router) · Closed Source Model
- [Arcee AI Spotlight](https://modelslab.com/models/arcee_ai/arcee_ai-arcee-spotlight) · [ModelsLab](https://modelslab.com/models/arcee_ai) · Closed Source Model
- [MoonshotAI: Kimi K2 Thinking](https://modelslab.com/models/open_router/moonshotai-kimi-k2-thinking) · [Open Router](https://modelslab.com/models/open_router) · Closed Source Model
- [Qwen2.5 1.5B](https://modelslab.com/models/together_ai/Qwen-Qwen2.5-1.5B) · [Together AI](https://modelslab.com/models/together_ai) · Closed Source Model
- [Qwen: Qwen3.6 Plus](https://modelslab.com/models/open_router/qwen-qwen3.6-plus) · [Open Router](https://modelslab.com/models/open_router) · Closed Source Model
- [Nous Hermes 2 Mixtral 8x7B DPO](https://modelslab.com/models/nous_research/NousResearch-Nous-Hermes-2-Mixtral-8x7B-DPO) · [ModelsLab](https://modelslab.com/models/nous_research) · Closed Source Model
- [Qwen 2 (7B)](https://modelslab.com/models/together_ai/Qwen-Qwen2-7B) · [Together AI](https://modelslab.com/models/together_ai) · Closed Source Model
- [Meta: Llama 3.2 11B Vision Instruct](https://modelslab.com/models/open_router/meta-llama-llama-3.2-11b-vision-instruct) · [Open Router](https://modelslab.com/models/open_router) · Closed Source Model
- [Qwen3.5 9B FP8](https://modelslab.com/models/together_ai/Qwen-Qwen3.5-9B) · [Together AI](https://modelslab.com/models/together_ai) · Closed Source Model
- [OpenAI: o3 Mini High](https://modelslab.com/models/open_router/openai-o3-mini-high) · [Open Router](https://modelslab.com/models/open_router) · Closed Source Model

About Llama 4 Scout Instruct (17Bx16E)
---

A 17 billion parameter multimodal LLM with 16 experts using MoE architecture, featuring a massive 10 million token context window for advanced text and image reasoning.

### Technical Specifications

- Model ID: meta-llama-Llama-4-Scout-17B-16E-Instruct
- Category: LLM Models
- Task: Text Generation
- Price: $0.385000 per million tokens
- Added: July 22, 2025

### Key Features

- Chat completion and multi-turn conversation API
- Streaming response with token-by-token output (see the streaming sketch after this list)
- Function calling and tool use support
- System prompts and role-based messaging
- JSON mode and structured output
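
The streaming feature above can be exercised with a request like the following sketch. The `stream` flag and the server-sent-events `data:` line format are assumptions based on typical OpenAI-compatible APIs rather than confirmed ModelsLab behavior; `model_id` and `key` follow the Quick Start example below.

```
import json
import requests

# Hedged sketch: streaming chat completions token by token.
# "stream": True and the "data: ..." SSE framing are assumptions modeled
# on common OpenAI-compatible APIs; verify against the ModelsLab docs.
url = "https://modelslab.com/api/v7/llm/chat/completions"

data = {
    "model_id": "meta-llama-Llama-4-Scout-17B-16E-Instruct",
    "messages": [{"role": "user", "content": "Explain quantum computing in simple terms."}],
    "max_tokens": 1000,
    "stream": True,  # assumption: OpenAI-style streaming flag
    "key": "YOUR_API_KEY",
}

with requests.post(url, json=data, stream=True, timeout=300) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue
        chunk = line[len("data: "):]
        if chunk == "[DONE]":
            break
        delta = json.loads(chunk)["choices"][0].get("delta", {})
        print(delta.get("content", ""), end="", flush=True)
```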

### Quick Start

Integrate Llama 4 Scout Instruct (17Bx16E) into your application with a single API call. Get your API key from the [pricing page](https://modelslab.com/pricing) to get started.

Python

```
import requests
import json

url = "https://modelslab.com/api/v7/llm/chat/completions"

headers = {
    "Content-Type": "application/json"
}

data = {
        "model_id": "meta-llama-Llama-4-Scout-17B-16E-Instruct",
        "messages": [
            {
                "role": "user",
                "content": "Hello!"
            }
        ],
        "max_tokens": 1000,
        "key": "YOUR_API_KEY"
    }

try:
    response = requests.post(url, headers=headers, json=data)
    response.raise_for_status()  # Raises an HTTPError for bad responses (4XX or 5XX)
    result = response.json()
    print("API Response:")
    print(json.dumps(result, indent=2))
except requests.exceptions.HTTPError as http_err:
    print(f"HTTP error occurred: {http_err} - {response.text}")
except Exception as err:
    print(f"Other error occurred: {err}")</code>
```

View the [full API documentation](https://modelslab.com/models/meta/meta-llama-Llama-4-Scout-17B-16E-Instruct/api) for SDKs, code examples in Python, JavaScript, and more.

### Pricing

Llama 4 Scout Instruct (17Bx16E) API costs $0.385000 per million tokens. Pay only for what you use with no minimum commitments. [View pricing plans](https://modelslab.com/pricing)
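
As a worked example, at $0.385000 per million tokens a job that processes 100,000 tokens would cost about $0.0385. This per-million figure matches the average of the input ($0.18 / 1M) and output ($0.59 / 1M) rates listed above, so the effective cost of a given request will vary with the split between prompt and completion tokens.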

### Use Cases

- AI chatbots and virtual assistants
- Code generation and developer tools
- Content writing and copywriting automation
- Data analysis, summarization, and extraction

[Learn more about Llama 4 Scout Instruct (17Bx16E)](https://modelslab.com/llama-4-scout-instruct-17bx16e) [Browse LLM Models](https://modelslab.com/models?feature=llmaster) [More from Meta](https://modelslab.com/models/open_router) [View Pricing](https://modelslab.com/pricing)

Llama 4 Scout Instruct (17Bx16E) FAQ
---

### What is Llama 4 Scout Instruct (17Bx16E)?

A 17 billion parameter multimodal LLM with 16 experts using MoE architecture, featuring a massive 10 million token context window for advanced text and image reasoning.

### How do I use the Llama 4 Scout Instruct (17Bx16E) API?

You can integrate Llama 4 Scout Instruct (17Bx16E) into your application with a single API call. Sign up on ModelsLab to get your API key, then use the model ID "meta-llama-Llama-4-Scout-17B-16E-Instruct" in your API requests. We provide SDKs for Python, JavaScript, and cURL examples in the API documentation.

### How much does Llama 4 Scout Instruct (17Bx16E) cost?

Llama 4 Scout Instruct (17Bx16E) costs $0.385000 per million tokens. ModelsLab uses pay-per-use pricing with no minimum commitments. A free tier is available to get started.

### What is the Llama 4 Scout Instruct (17Bx16E) model ID?

The model ID for Llama 4 Scout Instruct (17Bx16E) is "meta-llama-Llama-4-Scout-17B-16E-Instruct". Use this ID in your API requests to specify this model.

### Does Llama 4 Scout Instruct (17Bx16E) have a free tier?

Yes, ModelsLab offers a free tier that lets you try Llama 4 Scout Instruct (17Bx16E) and other AI models. Sign up to get free API credits and start building immediately.

---

*This markdown version is optimized for AI agents and LLMs.*

**Links:**
- [Website](https://modelslab.com)
- [API Documentation](https://docs.modelslab.com)
- [Blog](https://modelslab.com/blog)

---
*Generated by ModelsLab - 2026-05-06*