Note: This version still sucks. I don't really know what I'm doing wrong.
Training changes from the previously posted run (v5):
Training Steps: Maximum of 6000 >> 10000
Base Model Precision: int8-quanto >> fp8-quanto (see the quantization sketch below)
Learning Rate: 1.0
Optimizer: Prodigy (see the optimizer sketch below)
Batch Size: 1 >> 2
Gradient Accumulation Steps: 1
Rank: 16 >> 8
Gradient Clipping: default (probably 2) >> 0.1
Caption Strategy: Triple >> Instance prompt
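The precision names look like the weight dtypes from the optimum-quanto library, so here is a minimal sketch of what the int8 >> fp8 switch corresponds to there. The toy model is a placeholder; the real run would quantize the full base diffusion model.

```python
import torch
from optimum.quanto import quantize, freeze, qfloat8

# Toy stand-in for the base model; the real run quantizes the whole thing.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 64),
)

# v5 used int8 weights (weights=qint8); this run switches to fp8.
quantize(model, weights=qfloat8)
freeze(model)  # bake the quantized weights in place of the float ones
```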
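And a minimal sketch of the optimizer settings (Prodigy at LR 1.0, gradient clipping at 0.1, batch size 2), assuming PyTorch and the prodigyopt package; the model and loss are dummies standing in for the actual LoRA training step. Prodigy estimates its own step size, which is why the learning rate stays at 1.0; a tight clip like 0.1 is presumably there to keep its early updates from blowing up.

```python
import torch
from prodigyopt import Prodigy  # pip install prodigyopt

# Dummy model standing in for the LoRA-wrapped network.
model = torch.nn.Linear(16, 16)

# Prodigy adapts the effective step size itself, so lr stays at 1.0.
optimizer = Prodigy(model.parameters(), lr=1.0)

batch_size = 2
for step in range(10):
    x = torch.randn(batch_size, 16)
    loss = model(x).pow(2).mean()  # placeholder loss
    loss.backward()
    # Clip the gradient norm to 0.1 instead of the trainer's default.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.1)
    optimizer.step()
    optimizer.zero_grad()
```

Gradient accumulation stays at 1, so no accumulation logic is needed in the loop.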