This resulted from a bout of curiosity, as some people claim IA3 is the best thing since sliced bread. The result? It is an odd little thing, emphasis on "little": at 218KB it is comparable in size to a TI. You can compare against the LoRA version or the LoCon version.
Image-wise, it looks as if you had pulled all of the base model's influence out of a LoRA. In this particular case, since most of the dataset consisted of monochrome doujin pages along with some colored ones, the resulting images actually look like colored doujin pages! That's terrible and incredible at the same time.
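The tiny file size is not a fluke of this particular training run: IA3 learns one per-channel scaling vector per targeted weight, while a LoRA learns two low-rank matrices. A rough back-of-the-envelope sketch (the dimensions below are illustrative, not taken from any specific SD model):

```python
# Why an IA3 file is so much smaller than a LoRA, sketched on a single
# hypothetical 768x768 attention projection. Dimensions and rank are
# illustrative assumptions, not the real layer shapes of any model.

d_in, d_out, rank = 768, 768, 32

# LoRA stores two low-rank matrices per targeted weight:
# A (d_in x rank) and B (rank x d_out).
lora_params = d_in * rank + rank * d_out

# IA3 stores a single learned scaling vector per targeted weight,
# with one entry per output channel.
ia3_params = d_out

print(lora_params)                 # 49152
print(ia3_params)                  # 768
print(lora_params // ia3_params)   # 64 -> ~64x fewer parameters for this layer
```

Multiply that gap across every targeted layer and a sub-megabyte file in TI territory is exactly what you would expect.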
Here's my analysis of the pros and cons:
Pros:
Very small
Very quick to train: with Prodigy it requires roughly 250~300 steps per epoch per concept, over 8 epochs, to give a decent result.
I suspect it works extremely well with homogeneous, single-source datasets like screencaps, without burning.
Seems to be great at retaining style.
Portability between models is not as bad as reported: it is inferior to LoCon, but on par with or slightly below LoRA.
Cons:
Feels slightly less responsive than LoRA/LoCon to tags not present in the dataset.
It is more sensitive to shortcomings of the dataset that would normally be filled in by the model (missing angles, low hair detail, fabric textures, low color, blurriness).
It won't express the model's style strongly, making it more neutral but also slightly flavorless.
Conclusion:
IA3 seems perfect for making a quick, OK-ish LoRA: for prototyping, for quickly testing the viability of a dataset, and for beginners who don't care about things being "too exact" or about their images following a particular style.
I will add the dataset, the training TOML file, and an XYZ file comparing the results across 6 different models.

By the way, I got good results using: <lora:yoshizuki_ioriV4:.8> yoshizuki_iori,
The only tagged outfit is:
school_uniform_purple_shirt_blue_skirt_white_neckerchief_black_thighhighs
Description
Trained using Prodigy and the IA3 network type, with the following parameters:
[general_args.args]
pretrained_model_name_or_path = "D:/wifediff/stable-diffusion-webui/models/Stable-diffusion/anything-v4.5-pruned.safetensors"
mixed_precision = "fp16"
seed = 80085
clip_skip = 2
xformers = true
max_data_loader_n_workers = 1
persistent_data_loader_workers = true
max_token_length = 225
prior_loss_weight = 1.0
max_train_epochs = 8
training_comment = "Trigger: yoshizuki_iori, School_uniform_Purple_shirt_Blue_skirt_White_neckerchief_Black_thighhighs"
cache_latents = true
[general_args.dataset_args]
resolution = 512
batch_size = 1
[network_args.args]
network_dim = 32
network_alpha = 16.0
[optimizer_args.args]
optimizer_type = "Prodigy"
lr_scheduler = "cosine"
learning_rate = 1.0
lr_scheduler_type = "LoraEasyCustomOptimizer.CustomOptimizers.CosineAnnealingWarmupRestarts"
lr_scheduler_num_cycles = 1
min_snr_gamma = 5
[saving_args.args]
output_dir = "D:/wifediff/lora/iorilora"
save_precision = "fp16"
save_model_as = "safetensors"
output_name = "yoshizuki_ioriV4"
tag_occurrence = true
save_toml = true
save_every_n_epochs = 1
[bucket_args.dataset_args]
enable_bucket = true
min_bucket_reso = 256
max_bucket_reso = 1024
bucket_reso_steps = 64
[network_args.args.network_args]
conv_dim = 32
conv_alpha = 16.0
algo = "ia3"
[optimizer_args.args.lr_scheduler_args]
min_lr = 1e-6
gamma = 0.9
[optimizer_args.args.optimizer_args]
weight_decay = "0.01"
decouple = "True"
use_bias_correction = "True"
safeguard_warmup = "True"
d_coef = "2"
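For anyone not using the same TOML-based trainer, here is a rough, hedged sketch of how these settings map onto a direct kohya sd-scripts `train_network.py` invocation. Flag names can drift between versions, the dataset arguments are omitted, and the custom CosineAnnealingWarmupRestarts scheduler from the TOML is replaced here by the stock cosine scheduler, so treat this as an approximation rather than the exact command used:

```shell
# Approximate sd-scripts equivalent of the TOML above (IA3 via LyCORIS).
# Dataset/caption flags omitted; verify flag names against your sd-scripts version.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="D:/wifediff/stable-diffusion-webui/models/Stable-diffusion/anything-v4.5-pruned.safetensors" \
  --network_module=lycoris.kohya \
  --network_dim=32 --network_alpha=16 \
  --network_args "conv_dim=32" "conv_alpha=16" "algo=ia3" \
  --optimizer_type="Prodigy" --learning_rate=1.0 \
  --optimizer_args "weight_decay=0.01" "decouple=True" "use_bias_correction=True" "safeguard_warmup=True" "d_coef=2" \
  --lr_scheduler="cosine" --min_snr_gamma=5 \
  --max_train_epochs=8 --train_batch_size=1 --resolution=512 \
  --clip_skip=2 --max_token_length=225 --seed=80085 \
  --mixed_precision="fp16" --xformers --cache_latents \
  --enable_bucket --min_bucket_reso=256 --max_bucket_reso=1024 --bucket_reso_steps=64 \
  --output_dir="D:/wifediff/lora/iorilora" --output_name="yoshizuki_ioriV4" \
  --save_model_as=safetensors --save_precision="fp16" --save_every_n_epochs=1
```

The key part is `--network_module=lycoris.kohya` with `algo=ia3` in the network args; everything else is ordinary LoRA-style training configuration.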