Initial version
Trained on 90 images at 2K resolution or higher, with manually written caption tokens
Style training settings: batch size 4, unet_lr = 1e-4, text_encoder_lr = 5e-5, network_dim = 64, network_alpha = 48, 3 repeats, 8 epochs (epoch 7 gave the best fit)
Images scaled to 768 for the first (full) run
Images scaled to 512 for the second run (a launch sketch for both runs follows below)
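For reference, a minimal sketch of how the two runs might be launched, assuming the trainer is kohya-ss sd-scripts (train_network.py); the base model path, data directory, and output names are hypothetical placeholders, and the "3 repeats" value is encoded in the dataset folder name per sd-scripts convention:

```python
import subprocess

# Assumption: kohya-ss sd-scripts is the trainer. Paths and output names are
# hypothetical placeholders, not the author's actual files.
for resolution in (768, 512):
    subprocess.run(
        [
            "accelerate", "launch", "train_network.py",
            "--pretrained_model_name_or_path", "base_model.safetensors",  # placeholder
            "--train_data_dir", "train",        # contains e.g. 3_style/ (repeats = 3)
            "--output_dir", "output",
            "--output_name", f"style_{resolution}",
            "--network_module", "networks.lora",
            "--resolution", str(resolution),    # 768 for the first run, 512 for the second
            "--train_batch_size", "4",
            "--unet_lr", "1e-4",
            "--text_encoder_lr", "5e-5",
            "--network_dim", "64",
            "--network_alpha", "48",
            "--max_train_epochs", "8",
            "--save_every_n_epochs", "1",       # keeps epoch 7, the best fit
        ],
        check=True,
    )
```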
Resulting 768 and 512 LoRAs merged at weights 1.0 / 1.0
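A sketch of the merge step, assuming sd-scripts' networks/merge_lora.py (which, as I understand it, merges the listed LoRA files into a single LoRA when --sd_model is omitted); file names are placeholders matching the sketch above:

```python
import subprocess

# Assumption: merging via sd-scripts' networks/merge_lora.py; file names
# are hypothetical placeholders.
subprocess.run(
    [
        "python", "networks/merge_lora.py",
        "--save_to", "style_merged.safetensors",
        "--models", "style_768.safetensors", "style_512.safetensors",
        "--ratios", "1.0", "1.0",  # both LoRAs weighted at 1.0, as noted above
    ],
    check=True,
)
```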