Available now on ModelsLab · Language Model

Facebook CWM: Code World Model LLM

Run CWM Efficiently

Reasoning Built-In

Enable Thinking Tags

Inject <think> tags for step-by-step code reasoning in responses.
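To consume the reasoning separately from the final answer, the <think> block can be split out of the model's reply. A minimal sketch, assuming the model emits a single leading think block (the exact output format should be confirmed against the model card):

```python
import re


def split_reasoning(text: str) -> tuple[str, str]:
    """Separate a leading <think>...</think> block from the final answer.

    Returns (reasoning, answer); reasoning is empty if no think block
    is present.
    """
    match = re.match(r"\s*<think>(.*?)</think>\s*(.*)", text, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", text.strip()


reply = (
    "<think>Base case first, then the recursive step.</think>"
    "def fact(n): return 1 if n <= 1 else n * fact(n - 1)"
)
reasoning, answer = split_reasoning(reply)
```

This lets an application log or display the chain-of-thought separately while passing only the answer downstream.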

32B Parameters

Dense Decoder LLM

Deploy the Facebook CWM model for advanced code and chat tasks via API.

vLLM Ready

Serve Parallel

Use vLLM's --tensor-parallel-size flag for fast multi-GPU inference with the Facebook CWM API.
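For self-hosting, a typical vLLM launch might look like the following; the GPU count is illustrative and should match your hardware:

```shell
# Serve CWM across 4 GPUs with tensor parallelism (illustrative value).
vllm serve facebook/cwm --tensor-parallel-size 4
```

By default vLLM then exposes an OpenAI-compatible server at http://localhost:8000/v1/chat/completions.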

Examples

See what Facebook CWM can create

Copy any prompt below and try it yourself in the playground.

Recursion Haiku

Write a haiku about recursion in programming. Use precise code concepts and poetic structure.

SWE-bench Solve

Solve this SWE-bench task: Fix bug in Python repo handling async file I/O. Provide diff and explanation.

LiveCodeBench

Generate Python solution for LiveCodeBench problem on dynamic programming with memoization. Include tests.

MATH Proof

Prove this AIME-level math theorem step-by-step using logical reasoning and LaTeX notation.

For Developers

A few lines of code.
CWM inference. Two commands.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per token, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

# ModelsLab chat completions endpoint
response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={
        "key": "YOUR_API_KEY",  # your ModelsLab API key
        "prompt": "",           # your prompt text
        "model_id": ""          # the CWM model ID from the model page
    },
)
print(response.json())

FAQ

Common questions about Facebook CWM

Read the docs

What is Facebook CWM?
Facebook CWM (Code World Model) is a 32-billion-parameter dense decoder-only LLM for code generation, developed by Meta FAIR and available on Hugging Face.

How do I serve CWM locally?
Serve the facebook/cwm model with vLLM and query it through the OpenAI-compatible /v1/chat/completions endpoint. Reasoning can be enabled through chat_template_kwargs.
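As a sketch, a request body for that endpoint might look like this. The chat_template_kwargs key name used to toggle reasoning is an assumption here; confirm it against the CWM model card:

```python
import json

# Hedged sketch: build a request for vLLM's OpenAI-compatible
# /v1/chat/completions endpoint.
payload = {
    "model": "facebook/cwm",
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a linked list."}
    ],
    # Assumed kwarg name for enabling <think> reasoning; verify on the model card.
    "chat_template_kwargs": {"enable_thinking": True},
    "temperature": 0.2,
}

body = json.dumps(payload)
# POST `body` to http://localhost:8000/v1/chat/completions with the
# HTTP client of your choice (requests, httpx, curl, ...).
```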

What license does CWM use?
CWM is released under the CWM License; request access on Hugging Face to download the weights. Pre-trained, SFT, and post-trained checkpoints are included.

Can I use CWM without Hugging Face access approval?
Yes. ModelsLab provides a Facebook CWM API with pay-per-use pricing and a free tier, giving direct access without Hugging Face approval delays.

Does CWM show its reasoning?
Yes. CWM emits <think> tags by default when served with vLLM. You can disable this by turning reasoning off via the extra_body request option, or steer it with a specific system prompt.

Where are the model weights?
Hugging Face hosts the facebook/cwm, cwm-sft, and cwm-pretrain repositories, with PyTorch checkpoints from Meta AI.

Ready to create?

Start generating with Facebook CWM on ModelsLab.