Main training settings:

- 7 + 7 + 12 epochs (the first two training runs crashed after 8 epochs; training was resumed from epoch 7 twice)
- 1 repeat per epoch
- AdamW8bit optimizer
- Learning rates: U-Net = 1e-5; TextEnc = 2e-6
- Scheduler: Cosine with (restarts and) warmup for 10% of the steps
- Train batch size: 4
- No noise offset
- 8997 images
- Base model: Luminaverse XL v1.0 + FixVAE v2

Secondary training settings:

- 25 epochs, without crashing
- 1 repeat per epoch
- AdamW8bit optimizer
- Learning rates: U-Net = 1e-6; TextEnc = 0 (but no caching)
- Scheduler: Constant with no warmup
- Train batch size: 4
- Noise offset: 0.0357
- 2317 images (a subset of the previous 8997)
- Base model: the output of the main training
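For reference, the two stages can be summarized as plain configuration dictionaries. This is only an illustrative sketch: the field names below (`unet_lr`, `noise_offset`, etc.) are generic placeholders, not the actual option names of any particular training script.

```python
# Illustrative summary of the two training stages as plain Python dicts.
# Field names are generic placeholders, not tied to a specific trainer.

main_training = {
    "epochs": [7, 7, 12],          # three runs: two crashed after epoch 8, resumed from epoch 7
    "repeats_per_epoch": 1,
    "optimizer": "AdamW8bit",
    "unet_lr": 1e-5,
    "text_encoder_lr": 2e-6,
    "lr_scheduler": "cosine_with_restarts",
    "warmup_fraction": 0.10,       # warmup for 10% of the steps
    "train_batch_size": 4,
    "noise_offset": 0.0,           # no noise offset
    "num_images": 8997,
    "base_model": "Luminaverse XL v1.0 + FixVAE v2",
}

secondary_training = {
    "epochs": 25,
    "repeats_per_epoch": 1,
    "optimizer": "AdamW8bit",
    "unet_lr": 1e-6,
    "text_encoder_lr": 0.0,        # text encoder not trained, but outputs not cached
    "lr_scheduler": "constant",
    "warmup_fraction": 0.0,        # no warmup
    "train_batch_size": 4,
    "noise_offset": 0.0357,
    "num_images": 2317,            # subset of the 8997 images used in the main stage
    "base_model": "output of the main training",
}

if __name__ == "__main__":
    # Print both stages side by side for a quick comparison.
    for name, cfg in (("main", main_training), ("secondary", secondary_training)):
        print(f"--- {name} stage ---")
        for key, value in cfg.items():
            print(f"{key}: {value}")
```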