Fluxgram V1.0 is a FLUX-based model on ModelsLab built specifically for photorealistic character generation. If you've been generating characters with Stable Diffusion models and hitting realism ceilings — faces that don't quite land, skin textures that look plastic, eyes that feel off — Fluxgram addresses those specific weaknesses in the base FLUX architecture.
This guide covers the API parameters, prompt patterns that work, and how to integrate Fluxgram V1.0 into your existing image generation pipeline via the ModelsLab API.
What Makes Fluxgram Different from Base FLUX
Base FLUX (Black Forest Labs) is strong on composition and overall image quality. Fluxgram V1.0 fine-tunes that base specifically for:
- Facial realism — skin texture, pore detail, and lighting response on faces
- Character consistency — the same character described across multiple generations stays more visually consistent
- Portrait framing — the model understands portrait compositions by default without needing explicit composition prompts
- Natural skin tones — avoids the over-smoothed, filtered aesthetic common in character-focused models
The tradeoff: Fluxgram is optimized for human characters. For landscapes, abstract art, or product photography, base FLUX models will serve you better.
API Integration
Fluxgram V1.0 is available via the ModelsLab API. Here's the basic request structure:
```python
import requests

API_KEY = "your_modelslab_api_key"

response = requests.post(
    "https://modelslab.com/api/v6/images/text2img",
    headers={"Content-Type": "application/json"},
    json={
        "key": API_KEY,
        "model_id": "fluxgram",
        "prompt": "professional headshot, 35mm portrait, natural lighting, sharp focus",
        "negative_prompt": "blurry, plastic skin, oversmoothed, digital art",
        "width": 1024,
        "height": 1024,
        "samples": 1,
        "num_inference_steps": 30,
        "guidance_scale": 7.5,
        "seed": None,  # None serializes to null: a random seed per request
    },
)

result = response.json()
print(result["output"])
```
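In practice you'll want to check the response before reading output. Here's a hedged sketch of a response handler, assuming the response carries a status field ("success", "processing", or "error") plus a fetch_result URL for async completions, as ModelsLab's endpoints typically do. Verify the exact field names against the current API docs.

```python
def extract_output(result: dict) -> list:
    """Return image URLs from a ModelsLab-style response, or raise.

    Assumes a "status" field alongside "output"; long generations may
    finish asynchronously, in which case the response typically includes
    a URL to poll for the finished image.
    """
    status = result.get("status")
    if status == "success":
        return result.get("output", [])
    if status == "processing":
        raise RuntimeError(f"still processing, poll: {result.get('fetch_result')}")
    raise RuntimeError(f"generation failed: {result.get('message', 'unknown error')}")
```

With this in place, the earlier print(result["output"]) becomes print(extract_output(result)), and a queued or failed generation raises instead of silently printing nothing useful.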
Key Parameters for Fluxgram
model_id
Set model_id: "fluxgram" to use Fluxgram V1.0. This routes your request to the fine-tuned model instead of base FLUX.
num_inference_steps
30–35 steps give the best quality-to-speed ratio for Fluxgram. Below 20 steps, facial detail degrades noticeably; above 40, returns diminish.
guidance_scale
7.0–8.0 is the sweet spot. Lower values (5–6) produce softer, more stylized outputs. Higher values (9+) can make faces look HDR-processed or over-sharpened.
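The easiest way to find your own sweet spot for steps and guidance is an A/B sweep: generate the same prompt with one parameter varied and everything else pinned. A minimal sketch, where param_sweep is a hypothetical helper (not part of the ModelsLab API):

```python
def param_sweep(base: dict, param: str, values) -> list[dict]:
    """Clone a request payload across values of one parameter, pinning
    the seed so that parameter is the only variable between images."""
    return [{**base, param: v, "seed": 42} for v in values]

base = {"model_id": "fluxgram", "prompt": "portrait, 35mm, natural light"}

# One sweep for the quality-speed tradeoff, one for guidance strength:
step_runs = param_sweep(base, "num_inference_steps", [20, 30, 40])
cfg_runs = param_sweep(base, "guidance_scale", [6.5, 7.5, 8.5])
```

Each payload in step_runs or cfg_runs can then be POSTed exactly like the request above, and the fixed seed makes the outputs directly comparable.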
width and height
1024x1024 for portrait crops. 768x1024 for full-body shots. 1024x768 for landscape-framed character scenes. Fluxgram handles non-square ratios without major quality loss.
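Those three resolutions map cleanly to named presets. A small sketch, with hypothetical preset names (FRAMING and with_framing are not API concepts, just local conveniences):

```python
# Framing presets matching the resolutions recommended above.
FRAMING = {
    "portrait_crop": (1024, 1024),
    "full_body": (768, 1024),
    "landscape_scene": (1024, 768),
}

def with_framing(payload: dict, framing: str) -> dict:
    """Return a copy of the payload with width/height for a named preset."""
    width, height = FRAMING[framing]
    return {**payload, "width": width, "height": height}
```

Keeping resolutions behind names like "full_body" avoids transposed width/height bugs when you switch between portrait and landscape requests.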
Prompt Patterns That Work
Portrait with Lighting Specification
"portrait of a woman, 35mm film, soft studio lighting, shallow depth of field,
natural makeup, sharp eyes, professional headshot"
Environmental Character
"man in his 40s, outdoor environment, golden hour, environmental portrait,
candid expression, worn leather jacket, photojournalism style"
High-Fashion Editorial
"editorial fashion portrait, studio white background, dramatic directional lighting,
sharp focus, medium format photography aesthetic, high contrast"
Documentary Style
"documentary style portrait, natural window light, candid expression,
35mm grain, muted colors, authentic emotion"
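All four patterns share the same ordering: subject first, then lens/film stock, then lighting, then style cues. That ordering can be captured in a small helper; build_prompt is a hypothetical convenience, not part of the API:

```python
def build_prompt(subject: str, lighting: str, style: str,
                 lens: str = "35mm") -> str:
    """Assemble a prompt in the order the patterns above use:
    subject, lens/film stock, lighting, then style cues."""
    return ", ".join([subject, lens, lighting, style])

prompt = build_prompt("portrait of a woman", "soft studio lighting",
                      "professional headshot")
# "portrait of a woman, 35mm, soft studio lighting, professional headshot"
```

This keeps prompt structure consistent across a batch while letting you swap any one component (say, the lighting) without retyping the rest.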
Negative Prompts for Fluxgram
These negative prompts consistently improve output quality:
"blurry, out of focus, plastic skin, oversmoothed, overprocessed, digital art,
illustration, painting, cartoon, anime, text, watermark, multiple people"
The oversmoothed and plastic skin negatives are particularly important for Fluxgram — they prevent the model from defaulting to the hyper-clean aesthetic common in lower-quality character models.
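Since the same negatives apply to nearly every Fluxgram request, it's convenient to keep them as one constant and extend per request. A sketch, where BASE_NEGATIVE and negative_with are hypothetical local names:

```python
# Baseline negatives from the list above, shared by every request.
BASE_NEGATIVE = (
    "blurry, out of focus, plastic skin, oversmoothed, overprocessed, "
    "digital art, illustration, painting, cartoon, anime, text, "
    "watermark, multiple people"
)

def negative_with(*extras: str) -> str:
    """Append per-request negatives (e.g. "HDR") to the shared baseline."""
    return ", ".join([BASE_NEGATIVE, *extras])
```

Calling negative_with("HDR") then gives you the baseline plus the extra term, without retyping or accidentally dropping the plastic-skin negatives that matter most for this model.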
Batch Generation and Seed Management
For character consistency across multiple images, use a fixed seed value:
```python
import requests

API_KEY = "your_modelslab_api_key"
SEED = 42  # Fix this for consistent character

base_payload = {
    "key": API_KEY,
    "model_id": "fluxgram",
    "negative_prompt": "blurry, plastic skin, oversmoothed, digital art",
    "width": 1024,
    "height": 1024,
    "samples": 1,
    "num_inference_steps": 30,
    "guidance_scale": 7.5,
    "seed": SEED,
}

# Generate same character in different scenarios
prompts = [
    "portrait of Emma, 35mm, natural light, casual",
    "portrait of Emma, 35mm, studio light, professional",
    "portrait of Emma, 35mm, outdoor, golden hour",
]

for prompt in prompts:
    payload = {**base_payload, "prompt": prompt}
    response = requests.post(
        "https://modelslab.com/api/v6/images/text2img",
        headers={"Content-Type": "application/json"},
        json=payload,
    )
    print(response.json()["output"])
```
Note: Seed-based consistency in FLUX models is softer than in Stable Diffusion. The same seed + same prompt will produce very similar results, but swapping backgrounds or lighting setups will still cause minor facial variation. This is expected behavior.
Performance and Pricing
Fluxgram V1.0 runs on the same infrastructure as other ModelsLab FLUX models. Typical generation times:
- 30 inference steps at 1024x1024: 4–8 seconds
- Queue times vary with load; expect 1–3 second queue on off-peak hours
Pricing uses the same credit structure as other ModelsLab API endpoints. Check your ModelsLab dashboard for current credit rates.
Common Issues
Output looks over-processed
Add overprocessed, HDR, digital art to your negative prompt. Lower guidance_scale to 6.5–7.0.
Face details are blurry
Increase inference steps to 35. Add sharp focus, high detail, 8k to the positive prompt. Ensure you're generating at 1024x1024 minimum.
Character doesn't match across prompts
Fix your seed value and keep the character name consistent in the prompt (use a specific first name). Describe distinctive features explicitly: "woman with short dark hair, freckles, brown eyes" rather than generic descriptions.
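Combining those two fixes, a fixed seed and a fixed feature string, can look like this. FEATURES and character_payload are hypothetical local helpers; "Emma" stands in for whatever consistent name you use:

```python
# Pin the feature string and seed; vary only the scene between requests.
FEATURES = "woman with short dark hair, freckles, brown eyes"

def character_payload(scene: str, seed: int = 42) -> dict:
    """Build the prompt/seed portion of a request for one scene,
    repeating the same explicit features every time."""
    return {
        "prompt": f"portrait of Emma, {FEATURES}, {scene}",
        "seed": seed,
    }

studio = character_payload("studio light, professional")
outdoor = character_payload("outdoor, golden hour")
```

Merging each result into the base payload from the batch example above gives a set of requests where the only thing that changes is the scene, which is the best consistency FLUX-family models offer without a dedicated character LoRA.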
Get Started
Fluxgram V1.0 is available now via ModelsLab API. If you don't have API access, sign up for a free ModelsLab account to get your API key and start generating with the model today.