Available now on ModelsLab · Language Model

OpenAI: GPT-4 Turbo (older v1106)

Turbocharged GPT Intelligence

Unlock GPT-4 Turbo Power

Massive Context

128K Token Window

Process 300+ pages in one prompt with the OpenAI: GPT-4 Turbo (older v1106) model.

Vision Enabled

Text Plus Images

Handle vision requests using JSON mode and function calling in OpenAI: GPT-4 Turbo (older v1106).

Cost Efficient

$10 In / $30 Out per Million Tokens

Run the OpenAI: GPT-4 Turbo (older v1106) API at one-third the input cost of GPT-4.

Examples

See what OpenAI: GPT-4 Turbo (older v1106) can create

Copy any prompt below and try it yourself in the playground.

Code Generator

Write a Python function to parse JSON from a 10,000-token API response, handle errors, and output structured data using type hints.
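A minimal sketch of the kind of function this prompt asks for, using only the standard library (function and error-message names are illustrative, not part of any ModelsLab API):

```python
import json
from typing import Any

def parse_api_response(raw: str) -> dict[str, Any]:
    """Parse a JSON API response, raising a clear error on malformed input."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        # Surface the position of the syntax error instead of a bare traceback.
        raise ValueError(f"Malformed JSON at position {exc.pos}: {exc.msg}") from exc
    if not isinstance(data, dict):
        raise ValueError(f"Expected a JSON object, got {type(data).__name__}")
    return data
```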

Document Summary

Summarize this 50,000-token legal contract: identify key clauses, risks, obligations, and suggest edits for clarity.

Math Solver

Solve this graduate-level math problem step by step: compute ∫(x^2 + sin(x)) e^x dx from 0 to π, explaining your reasoning.
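For reference, the integral in this prompt has a closed form via integration by parts, which a quick numerical check (composite Simpson's rule, standard library only) can confirm:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Integrand from the prompt: (x^2 + sin x) * e^x on [0, pi]
numeric = simpson(lambda x: (x**2 + math.sin(x)) * math.exp(x), 0, math.pi)

# Closed form by parts:
#   ∫ x^2 e^x dx   = e^x (x^2 - 2x + 2)
#   ∫ sin(x) e^x dx = e^x (sin x - cos x) / 2
# Evaluated from 0 to pi:
exact = math.exp(math.pi) * (math.pi**2 - 2 * math.pi + 2.5) - 1.5
```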

JSON Builder

Generate valid JSON schema for a user profile API with fields for name, email, preferences, and nested address object.
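An illustrative example of the kind of schema this prompt requests (the specific fields and formats are sample choices, not a fixed API contract):

```python
import json

# Sample JSON Schema for the user-profile prompt above.
user_profile_schema = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "object",
    "required": ["name", "email"],
    "properties": {
        "name": {"type": "string"},
        "email": {"type": "string", "format": "email"},
        "preferences": {
            "type": "object",
            "additionalProperties": {"type": "string"},
        },
        "address": {  # nested object, as the prompt asks
            "type": "object",
            "properties": {
                "street": {"type": "string"},
                "city": {"type": "string"},
                "postal_code": {"type": "string"},
            },
        },
    },
}

print(json.dumps(user_profile_schema, indent=2))
```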

For Developers

A few lines of code.
GPT-4 Turbo. One API call.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per token, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={
        "key": "YOUR_API_KEY",                # your ModelsLab API key
        "prompt": "Write a haiku about the ocean.",
        "model_id": "gpt-4-1106-preview",     # GPT-4 Turbo (1106) model ID
    },
)
print(response.json())

FAQ

Common questions about OpenAI: GPT-4 Turbo (older v1106)

Read the docs

What is OpenAI: GPT-4 Turbo (older v1106)?

Released on November 6, 2023, it is OpenAI's GPT-4 Turbo preview with a 128K context window. It supports vision, JSON mode, and function calling. Knowledge cutoff: April 2023.

How do I access the model?

Call it via OpenAI-compatible endpoints such as OpenRouter, using the openai/gpt-4-1106-preview model ID. It supports text input with a maximum of 4,096 output tokens.

How much does it cost?

$10 per million input tokens and $30 per million output tokens, cheaper than the original GPT-4. Available through providers such as OpenAI.
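At those rates, a per-call cost estimate is simple arithmetic (the example token counts below are just the model's stated input and output limits):

```python
INPUT_RATE = 10.00 / 1_000_000   # dollars per input token
OUTPUT_RATE = 30.00 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one call at the listed GPT-4 Turbo rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Worst case: a full 128K-token context with the 4,096-token max output.
cost = estimate_cost(128_000, 4_096)
```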

Does it support vision?

Yes. It processes both text and images, and vision requests work alongside JSON mode and function calling. Ideal for image captioning or visual question answering (VQA).
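Since the model is reachable through OpenAI-compatible endpoints, a vision request payload might look like the following. This is a sketch of the OpenAI-style chat format; the image URL is illustrative and the exact fields a given provider accepts may differ:

```python
# Illustrative OpenAI-style chat payload mixing text and an image part.
payload = {
    "model": "openai/gpt-4-1106-preview",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Caption this image in one sentence."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},  # illustrative URL
            ],
        }
    ],
    "max_tokens": 200,
}
```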

How does it perform?

It is an active model with a best-case latency of 451 ms and throughput of 4.5 tok/s. Benchmarks are strong: 80.6% on MMLU Pro and 66.6% on GPQA Diamond.

What is the context window?

128,000 input tokens, with a maximum of 4,096 output tokens. That is enough for long documents and extended conversations.
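A rough way to check whether a document fits the window, using the common ~4-characters-per-token heuristic (an approximation for English text; use a real tokenizer for accurate counts):

```python
CONTEXT_WINDOW = 128_000
MAX_OUTPUT = 4_096
CHARS_PER_TOKEN = 4  # rough heuristic, not a real tokenizer

def fits_in_context(text: str, reserved_output: int = MAX_OUTPUT) -> bool:
    """Estimate whether `text` plus the reserved output budget fits in 128K tokens."""
    estimated_tokens = len(text) // CHARS_PER_TOKEN
    return estimated_tokens + reserved_output <= CONTEXT_WINDOW

# ~400K characters estimates to ~100K tokens, which fits with room for output.
fits = fits_in_context("x" * 400_000)
```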

Ready to create?

Start generating with OpenAI: GPT-4 Turbo (older v1106) on ModelsLab.