# Stable Diffusion Trainer

> Efficiently train custom Stable Diffusion models with flexible batch sizes, gradient checkpointing, and memory-optimized attention, requiring 12-24 GB VRAM for high-quality 512×512 to 1024×1024 image outputs.

## Overview

- **Model ID**: `sd-trainer`
- **Category**: training
- **Provider**: modelslab
- **Status**: active
- **Screenshot**: `https://assets.modelslab.com/generations/f352b8d0-1613-415c-9215-07cb5829ddcb.webp`

## API Information

This model can be used via our HTTP API. See the API documentation and usage examples below.

### Endpoint

- **URL**: `https://modelslab.com/api/v6/trainer/train`
- **Method**: POST

### Parameters

- **`instance_prompt`** (required): Prompt for training the instance.
  - Type: textarea
- **`images`** (required): Dataset for training, either as a path string or a list of paths.
  - Type: array
- **`hf_token`** (optional): Hugging Face token for accessing models.
  - Type: text
- **`hf_username`** (optional): Hugging Face username for model storage.
  - Type: text
- **`base_model_type`** (optional): Type of the base model. (Required)
  - Type: text
- **`seed`** (optional): Random seed for reproducibility. (Required)
  - Type: number
- **`training_steps`** (optional): Number of training steps.
  - Type: number
- **`resolution`** (optional): Resolution of the training images.
  - Type: number (range: 512-1024)
- **`batch_size`** (optional): Batch size for training.
  - Type: number (range: 1-128)
- **`mixed_precision`** (optional): Precision mode for training (allowed: fp16, bf16).
  - Type: select (options: fp16)
- **`rank`** (optional): Rank for LoRA.
  - Type: number (range: 1-512)
- **`alpha`** (optional): Alpha for LoRA.
  - Type: number (range: 1-1024)
- **`learning_rate`** (optional): Learning rate for training.
  - Type: number
- **`server_name`** (optional):
  - Type: select (options: NVIDIA GeForce RTX 3090 ($0.46/hr))
- **`trainer_id`** (optional):
  - Type: text

## Usage Examples

### cURL

```bash
curl --request POST \
  --url https://modelslab.com/api/v6/trainer/train \
  --header "Content-Type: application/json" \
  --data '{
    "key": "YOUR_API_KEY",
    "model_id": "sd-trainer",
    "instance_prompt": "thalman than",
    "base_model_type": "sdxl",
    "seed": "298329",
    "training_steps": "1500",
    "resolution": "1024",
    "batch_size": "4",
    "mixed_precision": "fp16",
    "rank": "4",
    "alpha": "2",
    "learning_rate": "0.0001",
    "server_name": "NVIDIA GeForce RTX 3090 ($0.46/hr)",
    "trainer_id": "sd-trainer"
  }'
```

### Python

```python
import requests

response = requests.post(
    "https://modelslab.com/api/v6/trainer/train",
    headers={
        "Content-Type": "application/json"
    },
    json={
        "key": "YOUR_API_KEY",
        "model_id": "sd-trainer",
        "instance_prompt": "thalman than",
        "base_model_type": "sdxl",
        "seed": "298329",
        "training_steps": "1500",
        "resolution": "1024",
        "batch_size": "4",
        "mixed_precision": "fp16",
        "rank": "4",
        "alpha": "2",
        "learning_rate": "0.0001",
        "server_name": "NVIDIA GeForce RTX 3090 ($0.46/hr)",
        "trainer_id": "sd-trainer"
    }
)

print(response.json())
```

### JavaScript

```javascript
fetch("https://modelslab.com/api/v6/trainer/train", {
  method: "POST",
  headers: {
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    "key": "YOUR_API_KEY",
    "model_id": "sd-trainer",
    "instance_prompt": "thalman than",
    "base_model_type": "sdxl",
    "seed": "298329",
    "training_steps": "1500",
    "resolution": "1024",
    "batch_size": "4",
    "mixed_precision": "fp16",
    "rank": "4",
    "alpha": "2",
    "learning_rate": "0.0001",
    "server_name": "NVIDIA GeForce RTX 3090 ($0.46/hr)",
    "trainer_id": "sd-trainer"
  })
})
  .then(response => response.json())
  .then(data => console.log(data));
```
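The examples above omit the required `images` parameter and pass numeric settings as strings, mirroring the reference payload. The sketch below is a minimal, non-authoritative Python variant: the example image URLs and the `validate_ranges` helper are illustrative assumptions, and only the parameter ranges it checks (resolution 512-1024, batch_size 1-128, rank 1-512, alpha 1-1024) are taken from the parameter list above.

```python
import requests

API_URL = "https://modelslab.com/api/v6/trainer/train"

# Ranges documented in the parameter list above.
DOCUMENTED_RANGES = {
    "resolution": (512, 1024),
    "batch_size": (1, 128),
    "rank": (1, 512),
    "alpha": (1, 1024),
}


def validate_ranges(payload: dict) -> None:
    """Hypothetical client-side sanity check against the documented ranges."""
    for name, (low, high) in DOCUMENTED_RANGES.items():
        value = payload.get(name)
        if value is not None and not (low <= int(value) <= high):
            raise ValueError(f"{name}={value} is outside the documented range {low}-{high}")


payload = {
    "key": "YOUR_API_KEY",
    "model_id": "sd-trainer",
    "instance_prompt": "thalman than",
    "images": [
        # Assumed example dataset paths; replace with your own image URLs or paths.
        "https://example.com/dataset/img_01.png",
        "https://example.com/dataset/img_02.png",
    ],
    "base_model_type": "sdxl",
    "seed": "298329",
    "training_steps": "1500",
    "resolution": "1024",
    "batch_size": "4",
    "mixed_precision": "fp16",
    "rank": "4",
    "alpha": "2",
    "learning_rate": "0.0001",
    "server_name": "NVIDIA GeForce RTX 3090 ($0.46/hr)",
    "trainer_id": "sd-trainer",
}

validate_ranges(payload)
response = requests.post(API_URL, headers={"Content-Type": "application/json"}, json=payload)
print(response.json())
```

Point `images` at your own dataset before submitting a real training job.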
## Links

- [Model Playground](https://modelslab.com/models/sd-trainer/sd-trainer)
- [API Documentation](https://docs.modelslab.com)
- [ModelsLab Platform](https://modelslab.com)