Available now on ModelsLab · Language Model

Nvidia Nemotron 3 Super 120B A12b Bf16

Agentic Reasoning Supercharged

Scale Intelligence Efficiently

Hybrid Architecture

Mamba-Transformer MoE

Activates 12B of 120B parameters for 2.2x throughput over GPT-OSS-120B on B200 GPUs.
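Sparse activation is easy to picture in code. The toy sketch below is illustrative only: the expert count, dimensions, and router are made up and bear no relation to Nemotron's actual configuration. It shows the core idea of top-k routing: every token is scored against all experts, but only the best few actually run.

```python
import math
import random

random.seed(0)
N_EXPERTS, TOP_K, D = 10, 2, 8   # hypothetical sizes, not the real model's

# Each "expert" is just a DxD weight matrix of random floats.
experts = [[[random.gauss(0, 1) for _ in range(D)] for _ in range(D)]
           for _ in range(N_EXPERTS)]
router = [[random.gauss(0, 1) for _ in range(N_EXPERTS)] for _ in range(D)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def moe_layer(x):
    """Score x against every expert, but run only the TOP_K best."""
    scores = [sum(x[i] * router[i][e] for i in range(D)) for e in range(N_EXPERTS)]
    top = sorted(range(N_EXPERTS), key=lambda e: scores[e])[-TOP_K:]
    z = sum(math.exp(scores[e]) for e in top)
    out = [0.0] * D
    for e in top:  # only TOP_K matmuls happen -- these are the "active" parameters
        w = math.exp(scores[e]) / z
        y = matvec(experts[e], x)
        out = [o + w * yi for o, yi in zip(out, y)]
    return out, top

x = [random.gauss(0, 1) for _ in range(D)]
out, used = moe_layer(x)
print(f"experts run: {len(used)}/{N_EXPERTS}")
```

The same principle, scaled up, is how a 120B-parameter model pays the compute cost of roughly 12B parameters per token.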

Long Context

1M Token Window

Handles extended sequences for multi-step planning and cross-document reasoning.

Optimized Precision

NVFP4 to Bf16

Pretrained in NVFP4, post-trained in Bf16 for 4x inference speed on Blackwell.
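To build intuition for why 4-bit weights are so cheap, here is a generic block-scaled 4-bit quantize/dequantize sketch in pure Python. This is NOT the actual NVFP4 format (which uses FP4 e2m1 values with hardware-defined per-block scales); it only illustrates the general trade-off of low-bit storage against reconstruction error.

```python
def quantize_4bit(values, block=8):
    """Map each block of floats to 4-bit ints (-7..7) plus one float scale."""
    out = []
    for i in range(0, len(values), block):
        chunk = values[i:i + block]
        scale = max(abs(v) for v in chunk) / 7 or 1.0  # avoid zero scale
        q = [max(-7, min(7, round(v / scale))) for v in chunk]
        out.append((scale, q))
    return out

def dequantize_4bit(blocks):
    return [scale * q for scale, qs in blocks for q in qs]

weights = [0.12, -0.5, 0.33, 0.9, -0.07, 0.41, -0.88, 0.02]
restored = dequantize_4bit(quantize_4bit(weights))
err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"max reconstruction error: {err:.3f}")
```

Each weight shrinks from 16 bits (Bf16) to 4 bits plus a shared scale, which is where the memory and bandwidth savings on Blackwell come from.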

Examples

See what Nvidia Nemotron 3 Super 120B A12b Bf16 can create

Copy any prompt below and try it yourself in the playground.

Code Generation

Write a Python function to parse JSON logs, extract error rates, and generate a summary report with visualizations using matplotlib. Include error handling and support for large files.

Task Planning

Plan a multi-step cybersecurity triage workflow: analyze network logs, identify anomalies, prioritize threats, and recommend mitigation steps with tool calls.

Math Reasoning

Solve this AIME-level problem: Find the number of integer solutions to x^2 + y^2 + z^2 = 2025 where x, y, z are positive integers up to 50. Explain each step.
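One nice property of this prompt is that the answer is mechanically checkable, which makes it a good probe for a reasoning model. A brute-force counter (assuming the question means ordered positive-integer triples) lets you verify whatever the model derives:

```python
# Count ordered triples (x, y, z) of positive integers up to 50
# with x^2 + y^2 + z^2 = 2025. Brute force is fine at this size (50^3 checks).
count = sum(
    1
    for x in range(1, 51)
    for y in range(1, 51)
    for z in range(1, 51)
    if x * x + y * y + z * z == 2025
)
print(count)
```

If the model instead counts unordered triples, divide out the permutations of each solution before comparing.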

Agent Workflow

Design an autonomous agent script for software development: generate unit tests, run them via subprocess, fix failures iteratively, and output refactored code.
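The agent-workflow prompt maps onto a simple control loop: call the model, run the code it produced in a subprocess, and feed failures back. A skeletal version is sketched below; `call_model` is a placeholder for the ModelsLab API call shown further down this page, and everything else is an assumption for illustration.

```python
import subprocess
import sys
import tempfile

def run_tests(test_code: str) -> tuple[bool, str]:
    """Write generated code+tests to a temp file and run it in a subprocess."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(test_code)
        path = f.name
    proc = subprocess.run([sys.executable, path],
                          capture_output=True, text=True, timeout=60)
    return proc.returncode == 0, proc.stdout + proc.stderr

def agent_loop(call_model, task: str, max_rounds: int = 5) -> str:
    """Iteratively ask the model for code until its own tests pass."""
    prompt, code = task, ""
    for _ in range(max_rounds):
        code = call_model(prompt)            # placeholder: your LLM call
        ok, output = run_tests(code)
        if ok:
            return code                      # tests pass: done
        prompt = f"{task}\n\nYour last attempt failed:\n{output}\nFix it."
    return code                              # best effort after max_rounds
```

A production version would sandbox the subprocess and parse the model's response for code blocks, but the fix-and-retry structure stays the same.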

For Developers

A few lines of code.
Reasoning. One API call.

ModelsLab handles the infrastructure: fast inference, auto-scaling, and a developer-friendly API. No GPU management needed.

  • Serverless: scales to zero, scales to millions
  • Pay per token, no minimums
  • Python and JavaScript SDKs, plus REST API
import requests

# Replace key with your ModelsLab API key; fill in prompt and model_id
# (from your dashboard) before calling.
response = requests.post(
    "https://modelslab.com/api/v7/llm/chat/completions",
    json={
        "key": "YOUR_API_KEY",
        "prompt": "",
        "model_id": ""
    }
)
print(response.json())

FAQ

Common questions about Nvidia Nemotron 3 Super 120B A12b Bf16

Read the docs

What is Nvidia Nemotron 3 Super 120B A12b Bf16?

A hybrid Mamba-Transformer MoE model with 120B total parameters, 12B of them active per token. Pretrained on 25T tokens in NVFP4 and post-trained in Bf16, it excels at agentic tasks such as coding and planning.

How fast is it?

It delivers 2.2x the throughput of GPT-OSS-120B and 7.5x that of Qwen3.5-122B at 8k-input/64k-output workloads, and supports a 1M-token context on B200 GPUs with vLLM/TRT-LLM.

Is it open?

Fully open under the NVIDIA Open License, with weights, datasets, and training recipes. Customize it for secure deployment anywhere from a workstation to the cloud.

What makes the architecture efficient?

Latent MoE consults 4x the experts at the cost of one, multi-token prediction speeds up generation, and the hybrid backbone cuts memory 4x on Blackwell versus H100 FP8.

How does it compare to other open models?

It outperforms GPT-OSS-120B in both intelligence and throughput, beats Qwen3.5-122B on efficiency at similar intelligence, and leads open models on PinchBench at 85.6%.

How do I use it?

It is available via the ModelsLab LLM endpoint. Use the Bf16 weights for post-training accuracy; it also integrates with NVIDIA NeMo for RL fine-tuning.

Ready to create?

Start generating with Nvidia Nemotron 3 Super 120B A12b Bf16 on ModelsLab.