How to generate images:
Why use the LoRA?
Save space: The LoRA is much smaller than the full model, so you don't have to download checkpoints or UNets over and over again. The specialized information added to a base model only needs to live in a small LoRA.
More flexibility: Experiment with different styles by combining the LoRA with other base models (Flux1-dev-fp8 checkpoints). If you use ComfyUI you can create a UNet with just two nodes: LoadCheckpoint to load flux1-dev-fp8 and ModelSave to save the UNet; link only the MODEL points of both.
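Outside ComfyUI, the same extraction can be sketched in plain Python: a combined checkpoint is just a state dict whose UNet tensors share a common key prefix. The `model.diffusion_model.` prefix below is an assumption based on common Stable Diffusion / Flux checkpoint layouts (verify it against your file), and a toy dict stands in for the real safetensors load so the sketch is self-contained.

```python
# Sketch: keep only the UNet/diffusion-model tensors from a combined checkpoint.
# The "model.diffusion_model." prefix is an ASSUMPTION based on common
# checkpoint layouts -- inspect your checkpoint's keys before relying on it.
# In practice you would load/save with safetensors.torch.load_file / save_file;
# here a plain dict stands in for the loaded state dict.

UNET_PREFIX = "model.diffusion_model."

def extract_unet(state_dict: dict) -> dict:
    """Keep only UNet tensors, stripping the checkpoint prefix from the keys."""
    return {
        key[len(UNET_PREFIX):]: tensor
        for key, tensor in state_dict.items()
        if key.startswith(UNET_PREFIX)
    }

# Toy state dict standing in for a real flux1-dev-fp8 checkpoint:
checkpoint = {
    "model.diffusion_model.double_blocks.0.img_attn.qkv.weight": "unet-tensor",
    "vae.decoder.conv_in.weight": "vae-tensor",
    "text_encoders.clip_l.logit_scale": "clip-tensor",
}

unet_only = extract_unet(checkpoint)
print(sorted(unet_only))  # only the UNet key remains, prefix removed
```

The VAE and text-encoder tensors are simply left behind, which is exactly what the two-node ComfyUI workflow does for you.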
Something you can't do with checkpoints and UNets: you can play with the strength of the LoRA. For some features you can enhance the image, e.g. make a woman more curvy (1.2 is enough).
Using the Flux1.Dev base model you prefer, whether checkpoint or UNet, together with the Finesse LoRA saves disk space and makes it easier to experiment with different styles. It's like having a modular system where you can customize your cake with different toppings.
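The strength knob mentioned above has a simple interpretation: a LoRA file ships a low-rank update (two small factors B and A), and the strength scales that update before it is added to the frozen base weight, W' = W + strength × (B·A). A minimal numeric sketch with made-up 2×2 toy matrices (not real Flux weights):

```python
# Toy illustration of how LoRA strength scales the weight update:
# W' = W + strength * (B @ A). All matrices here are tiny made-up
# examples, not real model weights.

def matmul(B, A):
    """Plain-Python matrix multiply of the two low-rank LoRA factors."""
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def apply_lora(W, B, A, strength):
    """Merge the scaled low-rank update into the frozen base weight."""
    delta = matmul(B, A)  # this product is what the LoRA file encodes
    return [[W[i][j] + strength * delta[i][j]
             for j in range(len(W[0]))] for i in range(len(W))]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight
B = [[1.0], [0.0]]             # rank-1 factors shipped in the LoRA
A = [[0.0, 2.0]]

print(apply_lora(W, B, A, 1.0))  # [[1.0, 2.0], [0.0, 1.0]]
print(apply_lora(W, B, A, 1.2))  # strength 1.2 amplifies the update
```

A merged checkpoint bakes this sum in at one fixed strength; keeping the LoRA separate is what lets you turn the dial at inference time.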
This is an attempt to distribute a modification of a base model in the LoRA format instead of as a fully trained or merged model. Every time we download a trained model as a checkpoint, we download the base model, the VAE, CLIP-L and T5 all over again, about 17 GB in total; if you use UNets the penalty is 11 GB per download. If you believe GGUF is the solution, the penalty is only cut in half (about 5 GB). That is to say, if we download "n" models based on the FluxDev-fp8 checkpoint, we have a redundancy of n × 17 GB. SSD vendors are very happy and grateful. With a LoRA-based distribution, you download your preferred Flux1.Dev base model (with or without the VAE, CLIP-L and T5 included) only once, and then just the specific LoRA.
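The redundancy above is easy to quantify. The 17 GB per checkpoint comes from the text; the ~0.1 GB per LoRA is an assumed typical size for a Flux LoRA:

```python
# Disk usage comparison: n full fp8 checkpoints vs. one base model plus n LoRAs.
# 17 GB per checkpoint is the figure from the text; 0.1 GB per LoRA is an
# ASSUMED typical Flux LoRA size.

CHECKPOINT_GB = 17.0
LORA_GB = 0.1  # assumption

def checkpoint_storage(n_models: int) -> float:
    """Every model ships the base, VAE, CLIP-L and T5 again: n x 17 GB."""
    return n_models * CHECKPOINT_GB

def lora_storage(n_models: int) -> float:
    """Base model downloaded once, plus one small LoRA per model."""
    return CHECKPOINT_GB + n_models * LORA_GB

for n in (1, 5, 10):
    print(n, checkpoint_storage(n), round(lora_storage(n), 1))
# With 10 models: 170 GB of checkpoints vs. about 18 GB with LoRAs.
```

The gap widens linearly with every extra style you want to keep on disk.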
The base model to make the sample images was:
https://huggingface.co/lllyasviel/flux1_dev/resolve/main/flux1-dev-fp8.safetensors
If you want an image in 6-8 steps, also download the ByteDance accelerator LoRA and include it in the prompt (strength 0.125):
https://huggingface.co/ByteDance/Hyper-SD/resolve/main/Hyper-FLUX.1-dev-8steps-lora.safetensors
For those who like GGUF, there are several quantized versions with the 6-8-step accelerator included:
https://huggingface.co/mhnakif/flux-hyp8/tree/main