# Hyperfusion v5
This LoRA model was trained on 200k images of normal to hyper-sized anime characters. It focuses mainly on breasts/ass/belly/thighs, but is trained on over 47,000 unique tags.
In short, this model is a "fusion" of all of my other models, hence the name. However, if you really want the absolute largest sizes, it's best to use this model AND one of my specific hyper models. I've found that specialized models work really well when they focus on one specific topic. I just wanted to have it all in one, so this is where we are.
- Doubled the dataset size to 200k+.
- Trained with Kohya's LoCon LoRA, but no additional extensions are required: `<lora:model_name:1>` is all you need.
- There is a bug in A1111 versions 1.0 and 1.1 that prevents Kohya's LoCon LoRA from loading. If you are on one of these versions, either upgrade to 1.2.x or install the LoRA extension.
- v4 focused on body size; this version focuses more on body shapes. See the tag docs for examples, e.g. round belly, chubby belly, sagging belly, hanging belly, breasts..., body..., etc.
- Better at belly-related tags overall (a good part of the new data was belly material).
- Better tag accuracy for sizes and shapes, thanks to more accurate image classifiers. There is still plenty of room for improvement.
- v5 is trained a bit further than v4, so you may need to reduce the strength for some prompts. It's usually fine at :1 for anime-type models, but should be lowered when mixing LoRAs.
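As a sketch, a typical A1111-style prompt using this LoRA at reduced strength might look like the following (the filename `hyperfusion_v5`, the strength value, and the tags are illustrative, not prescribed):

```
masterpiece, 1girl, huge breasts, round belly, thick thighs, <lora:hyperfusion_v5:0.7>
Negative prompt: 3d, ass focus, from behind
```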
Because Hyperfusion is a conglomeration of multiple tagging schemes, I've included a tag guide ("tag-data") in the training data download. It describes how the tags work (similar to Danbooru tags) and which tags the model knows best.
But for the most part, you can use a majority of tags from Danbooru, r-34, and e621 related to breasts/ass/belly/thighs/nipples.
The best method I have found for tag exploration is going to Danbooru/r-34, copying the tags from any image you like, and using them as a base, because there are simply too many tags trained into this model to test them all.
If you are not getting the results you expect from a tag, find other similar tags and include those as well. This model tends to spread its knowledge of a tag around to related tags, so including more will increase your chances of getting what you want.
- Using "3d" in the negative prompt does a good job of making the image more anime-like if it starts veering too far into a rendered-model look.
- AbyssOrange has a habit of putting nipples on bellies; try to avoid nipple tags when generating big bellies and use "navel" tags instead. This rarely happens anymore; it was most likely a data issue.
- Ass-related tags have a strong preference for back shots. Try a low-strength ControlNet pose to correct this, or add one or more of "ass focus, from behind, looking back" to the negatives. The new "ass visible from front" tag can help too.
This model took me months of failures and plenty of lessons learned (hence v4)! If LoCon LoRA picks up in popularity, I may retrain it in the near future. I would eventually like to train a few more image classifiers to improve certain tags, but those are all future dreams for now.
As usual I have no intention of monetizing any of my models. Enjoy the thickness!
-Tagging-
The key to tagging a 100k dataset is to automate it all. I started with wd-tagger (or a similar booru tagger) to append common tags on top of the tags scraped with the images from their source site. Then I trained a handful of image classifiers (breast size, ass size, hyper/not hyper, ...) and let those do the heavy lifting. Finally, I converted groups of similar tags into a single tag, as described in the tag docs.
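The final tag-merging step above can be sketched as a simple alias map. The alias pairs below are illustrative only; the real groupings are described in the "tag-data" docs.

```python
# Collapse groups of near-synonymous tags into one canonical tag.
# The alias mapping here is a made-up example, not the real tag-data.
ALIASES = {
    "big_belly": "large belly",
    "huge_belly": "large belly",
    "tummy": "belly",
    "big_boobs": "large breasts",
}

def normalize_tags(tags):
    """Map each tag to its canonical form and drop duplicates, keeping order."""
    seen, out = set(), []
    for tag in tags:
        canon = ALIASES.get(tag, tag)
        if canon not in seen:
            seen.add(canon)
            out.append(canon)
    return out

print(normalize_tags(["big_belly", "huge_belly", "tummy"]))
# -> ['large belly', 'belly']
```

Running this over every caption file gives each concept a single, consistent tag before training.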
-Poor Results-
For a long time I was plagued with subpar results. I suspected the data was just too low quality, but in the end it turned out to be poorly tagged images. Sites like r-34 tend to pile multiple size tags onto the same image, e.g. "big breasts, huge breasts, hyper breasts" all at once. This is not great for a model where you want to prompt for specific sizes. Using the classifiers mentioned above, I limited each image to a single size tag per body part, and the results were night and day.
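A minimal sketch of that cleanup step, assuming a classifier that predicts one size per body part (the tag sets and classifier output here are illustrative):

```python
# Drop every scraped size tag, then re-add the classifier's single pick
# per body part, so each image carries exactly one size tag per part.
SIZE_TAGS = {
    "breasts": {"big breasts", "huge breasts", "hyper breasts"},
    "ass": {"big ass", "huge ass", "hyper ass"},
}

def resolve_sizes(tags, classifier_pred):
    """Replace conflicting size tags with one classifier-chosen tag per part."""
    all_sizes = set().union(*SIZE_TAGS.values())
    kept = [t for t in tags if t not in all_sizes]
    kept.extend(classifier_pred.values())
    return kept

tags = ["1girl", "big breasts", "huge breasts", "hyper breasts", "smile"]
print(resolve_sizes(tags, {"breasts": "huge breasts"}))
# -> ['1girl', 'smile', 'huge breasts']
```

The point is simply that contradictory labels are removed before training, so "huge breasts" stops leaking into "big breasts" prompts.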
-Testing-
To determine whether a new model is better than the last, it's important to have some standard prompts you can compare with; the x/y plot script is great for this. Keep in mind that the same seed will produce totally different images between models, so you will likely need to compare dozens of images at a time rather than 1 to 1. It's also important to compare new models against the base model's output, to make sure the training is actually having an overall positive effect relative to the original model.
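The comparison grid above amounts to a cross product of models and fixed prompts, with the untrained base model included as a control. A tiny sketch (the model and prompt names are placeholders):

```python
# Enumerate the cells of an x/y comparison plot: every model, including
# the untrained base as a control, paired with the same fixed prompts.
from itertools import product

models = ["base_model", "hyperfusion_v4", "hyperfusion_v5"]  # placeholder names
prompts = [
    "1girl, huge breasts, standing",
    "1girl, round belly, sitting",
]

grid = list(product(models, prompts))
for model, prompt in grid:
    print(f"{model:16} | {prompt}")
# 3 models x 2 prompts = 6 cells; generate a batch of seeds per cell,
# since one-image comparisons across models are not meaningful.
```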
-Software/Hardware-
The training was all done on a 3090 in an Ubuntu docker instance. The software was Kohya's trainer using the LoRA network and lots of patience.
## Overview

- **Model ID**: `hyperfusion-v5`
- **Category**: stable diffusion xl
- **Provider**: modelslab
- **Status**: model_ready
- **Screenshot**: `https://assets.modelslab.com/generations/781506b9-eb54-47db-ace2-a5b7b7547118-0.png`

## API Information

This model can be used via our HTTP API. See the API documentation and usage examples below.

### Endpoint

- **URL**: `https://modelslab.com/api/v6/images/text2img`
- **Method**: POST

### Parameters

- **`prompt`** (required): prompt to guide image generation
  - Type: textarea
  - Example: Enter prompt
- **`model_id`** (required): the model ID to use for generation
  - Type: text
  - Example: Enter model_id here
- **`lora_model`** (required)
  - Type: multiple_models
- **`width`** (required): width of the image
  - Type: number (range: 512-1024)
- **`height`** (required): height of the image
  - Type: number (range: 512-1024)
- **`negative_prompt`** (optional): describes what you do not want to see in the image
  - Type: textarea
  - Example: Enter a negative prompt describing what you do not want to see in the image
- **`scheduler`** (optional)
  - Type: select (options: DPM++ 2M, DPM++ SDE, Euler, Euler a)
- **`guidance_scale`** (optional)
  - Type: number (range: 1-10)

## Usage Examples

### cURL

```bash
curl --request POST \
  --url https://modelslab.com/api/v6/images/text2img \
  --header "Content-Type: application/json" \
  --data '{
    "key": "YOUR_API_KEY",
    "model_id": "hyperfusion-v5",
    "prompt": "R3alisticF, hauntingly beautiful oriental necromancer, long flowing brown hair, bangs, darkly tanned skin, earrings, bone necklaces, dark eyeshadow, red lips, vibrant, front-laced transparent, filmy silk blouse, cleavage, holding skull, in a sandstone room lit by candles, High Detail, Perfect Composition, high contrast, silhouetted, chiascuro",
    "width": "1024",
    "height": "1024",
    "negative_prompt": "(worst quality:2), (low quality:2), (normal quality:2), (jpeg artifacts), (blurry), (duplicate), (morbid), (mutilated), (out of frame), (extra limbs), (bad anatomy), (disfigured), (deformed), (cross-eye), (glitch), (oversaturated), (overexposed), (underexposed), (bad proportions), (bad hands), (bad feet), (cloned face), (long neck), (missing arms), (missing legs), (extra fingers), (fused fingers), (poorly drawn hands), (poorly drawn face), (mutation), (deformed eyes), watermark, text, logo, signature, grainy, tiling, censored, nsfw, ugly, blurry eyes, noisy image, bad lighting, unnatural skin, asymmetry",
    "scheduler": "DPMSolverMultistepScheduler",
    "guidance_scale": "7.5"
  }'
```

### Python

```python
import requests

response = requests.post(
    "https://modelslab.com/api/v6/images/text2img",
    headers={"Content-Type": "application/json"},
    json={
        "key": "YOUR_API_KEY",
        "model_id": "hyperfusion-v5",
        "prompt": "R3alisticF, hauntingly beautiful oriental necromancer, long flowing brown hair, bangs, darkly tanned skin, earrings, bone necklaces, dark eyeshadow, red lips, vibrant, front-laced transparent, filmy silk blouse, cleavage, holding skull, in a sandstone room lit by candles, High Detail, Perfect Composition, high contrast, silhouetted, chiascuro",
        "width": "1024",
        "height": "1024",
        "negative_prompt": "(worst quality:2), (low quality:2), (normal quality:2), (jpeg artifacts), (blurry), (duplicate), (morbid), (mutilated), (out of frame), (extra limbs), (bad anatomy), (disfigured), (deformed), (cross-eye), (glitch), (oversaturated), (overexposed), (underexposed), (bad proportions), (bad hands), (bad feet), (cloned face), (long neck), (missing arms), (missing legs), (extra fingers), (fused fingers), (poorly drawn hands), (poorly drawn face), (mutation), (deformed eyes), watermark, text, logo, signature, grainy, tiling, censored, nsfw, ugly, blurry eyes, noisy image, bad lighting, unnatural skin, asymmetry",
        "scheduler": "DPMSolverMultistepScheduler",
        "guidance_scale": "7.5"
    }
)
print(response.json())
```

### JavaScript

```javascript
fetch("https://modelslab.com/api/v6/images/text2img", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    "key": "YOUR_API_KEY",
    "model_id": "hyperfusion-v5",
    "prompt": "R3alisticF, hauntingly beautiful oriental necromancer, long flowing brown hair, bangs, darkly tanned skin, earrings, bone necklaces, dark eyeshadow, red lips, vibrant, front-laced transparent, filmy silk blouse, cleavage, holding skull, in a sandstone room lit by candles, High Detail, Perfect Composition, high contrast, silhouetted, chiascuro",
    "width": "1024",
    "height": "1024",
    "negative_prompt": "(worst quality:2), (low quality:2), (normal quality:2), (jpeg artifacts), (blurry), (duplicate), (morbid), (mutilated), (out of frame), (extra limbs), (bad anatomy), (disfigured), (deformed), (cross-eye), (glitch), (oversaturated), (overexposed), (underexposed), (bad proportions), (bad hands), (bad feet), (cloned face), (long neck), (missing arms), (missing legs), (extra fingers), (fused fingers), (poorly drawn hands), (poorly drawn face), (mutation), (deformed eyes), watermark, text, logo, signature, grainy, tiling, censored, nsfw, ugly, blurry eyes, noisy image, bad lighting, unnatural skin, asymmetry",
    "scheduler": "DPMSolverMultistepScheduler",
    "guidance_scale": "7.5"
  })
})
  .then(response => response.json())
  .then(data => console.log(data));
```

## Links

- [Model Playground](https://modelslab.com/models/community-model/hyperfusion-v5)
- [API Documentation](https://docs.modelslab.com)
- [ModelsLab Platform](https://modelslab.com)