Available now on ModelsLab · Language Model

Facebook CWM: Code World Model LLM

Run CWM Efficiently

Reasoning Built-In

Enable Thinking Tags

Inject <think> tags to elicit step-by-step code reasoning in responses.
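As a rough sketch of how thinking tags can be handled client-side, the helper below separates a <think>…</think> reasoning block from the final answer. The function name and the exact tag format are assumptions; check what CWM actually emits before relying on this.

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Separate a <think>...</think> reasoning block from the final answer.

    Hypothetical helper: the exact tag format the model emits may differ.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        # No reasoning block found; treat the whole text as the answer.
        return "", text.strip()
    reasoning = match.group(1).strip()
    # Remove the reasoning block, keeping any text around it as the answer.
    answer = (text[:match.start()] + text[match.end():]).strip()
    return reasoning, answer

sample = "<think>Check the base case first.</think>The recursion terminates."
reasoning, answer = split_thinking(sample)
print(reasoning)
print(answer)
```

Keeping the reasoning separate lets you log or display it without mixing it into the answer shown to users.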

32B Parameters

Dense Decoder LLM

Deploy the Facebook CWM model for advanced code and chat tasks via API.

vLLM Ready

Serve Parallel

Use vLLM's --tensor-parallel-size flag for fast multi-GPU inference when self-hosting Facebook CWM.
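A minimal sketch of assembling a `vllm serve` launch command with tensor parallelism. The model id "facebook/cwm" and the GPU count are assumptions; substitute the actual checkpoint name and your hardware's GPU count.

```python
import shlex

def build_vllm_command(model_id: str, tensor_parallel_size: int) -> list[str]:
    """Assemble a `vllm serve` command that shards the model across GPUs."""
    return [
        "vllm", "serve", model_id,
        # Split the model's weights across this many GPUs.
        "--tensor-parallel-size", str(tensor_parallel_size),
    ]

# Hypothetical model id; adjust for the real checkpoint.
cmd = build_vllm_command("facebook/cwm", 4)
print(shlex.join(cmd))
```

The assembled command can be run in a shell or passed to `subprocess.run(cmd)` to start the server.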

Examples

See what Facebook CWM can create

Copy any prompt below and try it yourself in the playground.

Recursion Haiku

Write a haiku about recursion in programming. Use precise code concepts and poetic structure.

SWE-bench Solve

Solve this SWE-bench task: Fix bug in Python repo handling async file I/O. Provide diff and explanation.

LiveCodeBench

Generate Python solution for LiveCodeBench problem on dynamic programming with memoization. Include tests.

MATH Proof

Prove this AIME-level math theorem step-by-step using logical reasoning and LaTeX notation.

For Developers

A few lines of code.
CWM inference. Two commands.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per token, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

# Call the ModelsLab chat completions endpoint.
response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={
        "key": "YOUR_API_KEY",  # your ModelsLab API key
        "prompt": "",           # your prompt text
        "model_id": "",         # the CWM model id
    },
)
print(response.json())
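Once the request returns, you will typically want just the generated text. The helper below pulls it from a chat-completions style payload; the field names ("choices", "message", "content") follow the common chat-completions shape and are an assumption about ModelsLab's schema, so inspect the real response with print(response.json()) first.

```python
def extract_text(payload: dict) -> str:
    """Pull the generated text out of a chat-completions style payload.

    Field names are assumptions about the response schema; verify against
    an actual API response before relying on them.
    """
    choices = payload.get("choices") or []
    if choices:
        return choices[0].get("message", {}).get("content", "")
    # Fall back to a flat "output" field if present.
    return payload.get("output", "")

# Mock payload for illustration only, not a real API response.
mock = {"choices": [{"message": {"content": "def add(a, b): return a + b"}}]}
print(extract_text(mock))
```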

FAQ

Common questions about Facebook CWM

Read the docs

Ready to create?

Start generating with Facebook CWM on ModelsLab.