ModelsLab/Flux Lora Trainer

flux_lora_trainer
Quickly train custom LoRA models with optimized pipelines that support a variety of image formats and require only 16 GB of VRAM for efficient fine-tuning.

API Endpoint URL

Base URL for all API requests to this endpoint.

https://modelslab.com/api/v6/trainer/train

API Authentication

Authentication requires a valid API key included in the request. Generate and manage your API keys from your developer dashboard. Include the key in the key parameter for all API requests.
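
For example, assuming the JSON request body shown under Integration Examples below, the key is sent as a top-level field of the payload. This is a minimal sketch, not an official sample; substitute your own key.

payload = {
    "key": "YOUR_API_KEY",  # generated from your developer dashboard
    # ... the remaining training parameters described below ...
}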

Integration Examples

Production-ready code samples for API integration

{
  "images": [],
  "trigger_word": "Pink",
  "hf_username": null,
  "hf_token": null,
  "training_steps": "1000",
  "server_name": "NVIDIA H100 80GB HBM3",
  "resolution": "1024",
  "trainer_id": "flux_lora_trainer",
  "key": "YOUR_API_KEY"
}
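
As an illustration only (not an official SDK sample), the payload above can be posted to the endpoint with Python's requests library. The image URLs below are hypothetical placeholders; supply your own training images and API key.

import requests

# Training request payload, mirroring the JSON example above.
payload = {
    "images": [
        "https://example.com/dataset/img_01.png",  # placeholder URL
        "https://example.com/dataset/img_02.png",  # placeholder URL
    ],
    "trigger_word": "Pink",
    "hf_username": None,
    "hf_token": None,
    "training_steps": "1000",
    "server_name": "NVIDIA H100 80GB HBM3",
    "resolution": "1024",
    "trainer_id": "flux_lora_trainer",
    "key": "YOUR_API_KEY",
}

# POST the request to the training endpoint and print the JSON response.
response = requests.post(
    "https://modelslab.com/api/v6/trainer/train",
    json=payload,
    timeout=60,
)
print(response.status_code)
print(response.json())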

Official SDKs

Production-ready SDKs and client libraries for all major programming languages.

API Parameters

Technical specifications for API request parameters.

Init Image (images): Upload or paste URLs of the training images.
Trigger Word (trigger_word): Trigger word for the LoRA model.
Hf Username (hf_username): Hugging Face username (required).
Hf Token (hf_token): Hugging Face access token for the user (required).
Training Steps (training_steps): Recommended training steps depend on your dataset size: training_steps = 100 to 150 × number of images (for example, 10 images × 150 = 1500 steps). As a rule of thumb, fewer images call for more steps per image, and more images call for fewer steps per image; see the sketch after this list.
Deploy Server (server_name): Server on which to train your LoRA model.
Resolution (resolution): Training resolution (for example, 1024).
Trainer ID (trainer_id): Trainer identifier; set to flux_lora_trainer for this endpoint.
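
The rule of thumb for training_steps above can be expressed as a small helper. This function is purely illustrative and not part of the API; it multiplies the number of images by a per-image step count between 100 and 150.

# Hypothetical helper implementing: training_steps = 100-150 × number of images.
def recommended_training_steps(num_images: int, steps_per_image: int = 150) -> int:
    if not 100 <= steps_per_image <= 150:
        raise ValueError("steps_per_image should be between 100 and 150")
    return num_images * steps_per_image

# Example from the parameter description: 10 images × 150 = 1500 steps.
print(recommended_training_steps(10))  # 1500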