---
title: Llama 3.2 90B Vision Turbo — Multimodal LLM | ModelsLab
description: Process text and images with Meta Llama 3.2 90B Vision Instruct Turbo for visual reasoning and captioning. Try the API now.
url: https://modelslab.com/meta-llama-32-90b-vision-instruct-turbo
canonical: https://modelslab.com/meta-llama-32-90b-vision-instruct-turbo
type: website
component: Seo/ModelPage
generated_at: 2026-04-15T00:32:37.985418Z
---

Available now on ModelsLab · Language Model

Meta Llama 3.2 90B Vision Instruct Turbo
Vision Meets Reasoning
---

[Try Meta Llama 3.2 90B Vision Instruct Turbo](/models/meta/meta-llama-Llama-3.2-90B-Vision-Instruct-Turbo) [API Documentation](https://docs.modelslab.com)

Process Text and Images
---

Multimodal Input

### Handle Images and Text

Accept text and images as input and generate text outputs with a 90-billion-parameter model.

128K Context

### Extended Token Window

Support a 128,000-token context window for complex visual reasoning and long conversations.
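A quick way to sanity-check that a long conversation fits the 128,000-token window, using a rough heuristic of ~4 characters per token for English text (the model's real tokenizer will differ, so treat this as an estimate, not a guarantee):

```python
def rough_token_count(text: str) -> int:
    """Rough estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_context(messages: list[str], limit: int = 128_000) -> bool:
    """Check whether a conversation's estimated token total fits the window."""
    return sum(rough_token_count(m) for m in messages) <= limit

# A 19,000-character conversation is roughly 4,750 tokens -- well within 128K.
print(fits_context(["Analyze this chart. " * 950]))  # True
```

For production use, count tokens with the model's actual tokenizer and leave headroom for the generated output.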

Visual Reasoning

### Analyze Charts and Graphs

Extract insights from charts, graphs, and documents with image reasoning.

Examples

See what Meta Llama 3.2 90B Vision Instruct Turbo can create
---

Copy any prompt below and try it yourself in the [playground](/models/meta/meta-llama-Llama-3.2-90B-Vision-Instruct-Turbo).

Chart Analysis

“Analyze this sales chart image. Identify the month with the highest revenue and explain the trend.”

Document QA

“Examine this invoice image. Extract the total amount, date, and vendor details as structured JSON.”

Image Caption

“Provide a detailed caption for this architectural blueprint image, noting key structures and measurements.”

Graph Reasoning

“Review this line graph image. Summarize growth patterns and predict the next quarter based on the data.”
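Any of the prompts above can be sent through the API shown in the developer section below. Here is a minimal sketch that just assembles the request body; the field names (`key`, `prompt`, `model_id`) come from this page's own code sample, while `YOUR_API_KEY` and `YOUR_MODEL_ID` are placeholders you supply:

```python
API_URL = "https://modelslab.com/api/v7/llm/chat/completions"

def build_payload(api_key: str, model_id: str, prompt: str) -> dict:
    """Assemble the JSON body expected by the chat completions endpoint."""
    return {"key": api_key, "model_id": model_id, "prompt": prompt}

payload = build_payload(
    api_key="YOUR_API_KEY",    # placeholder: your ModelsLab API key
    model_id="YOUR_MODEL_ID",  # placeholder: the model's ID on ModelsLab
    prompt=("Examine this invoice image. Extract the total amount, "
            "date, and vendor details as structured JSON."),
)

# To send it for real:
#   import requests
#   print(requests.post(API_URL, json=payload).json())
print(sorted(payload))  # ['key', 'model_id', 'prompt']
```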

For Developers

A few lines of code.
Vision inference. One call.
---

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

- **Serverless:** scales to zero, scales to millions
- **Pay per token,** no minimums
- **Python and JavaScript SDKs,** plus REST API

[API Documentation](https://docs.modelslab.com)


```python
import requests

# Replace the placeholders with your API key and the model's ID on ModelsLab.
response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={
        "key": "YOUR_API_KEY",
        "prompt": "Analyze this sales chart image. Identify the month with the highest revenue.",
        "model_id": "YOUR_MODEL_ID",
    },
)
print(response.json())
```

FAQ

Common questions about Meta Llama 3.2 90B Vision Instruct Turbo
---

[Read the docs](https://docs.modelslab.com)

### What is Meta Llama 3.2 90B Vision Instruct Turbo?

### How do I use the Meta Llama 3.2 90B Vision Instruct Turbo API?

### What tasks does the Meta Llama 3.2 90B Vision Instruct Turbo model handle?

### Is Meta Llama 3.2 90B Vision Instruct Turbo an alternative to Claude?

### What inputs does the Meta Llama 3.2 90B Vision Instruct Turbo API accept?

### Where can I access the Meta Llama 3.2 90B Vision Instruct Turbo model?

Ready to create?
---

Start generating with Meta Llama 3.2 90B Vision Instruct Turbo on ModelsLab.

[Try Meta Llama 3.2 90B Vision Instruct Turbo](/models/meta/meta-llama-Llama-3.2-90B-Vision-Instruct-Turbo) [API Documentation](https://docs.modelslab.com)

---

*This markdown version is optimized for AI agents and LLMs.*

**Links:**
- [Website](https://modelslab.com)
- [API Documentation](https://docs.modelslab.com)
- [Blog](https://modelslab.com/blog)

---
*Generated by ModelsLab - 2026-04-15*