AstolfoRF (3EP) / AstolfoVL (2.5EP) / AstolfoXL (2EP)
Probably the first (and only) individual full finetune with multi-GPU, and... as open-source as it can be (may not be copyleft).
LoKr works, but no thanks. "I'm done being human, Sanae!"
Discord: "Good luck".
Specification
Base model (3EP): AstolfoVL, version 2.5EP (VPRED) RF
Base model (2.5EP): AstolfoKarMix-XL, version Evo-2EP VPRED
Base model (2EP): AstolfoKarMix-XL, version NIL1.5 v1.2
Base model (1EP): AstolfoMix-XL, version 255c
Tech report: ch06
Training metrics (tensorboard): HF
Dataset (images > latents): danbooru2024, e621_2024
Dataset (tags + captions): meta_lat.json
1 step = 16 images, 4x RTX 3090 24G.
778k steps for 1EP, 8.0 + 4.6 = 12.6M images
Tag + NLP caption with A1111 token trick
Trainer codes: The PR won't be merged
Train parameter (1EP, 2EP): adamW8bit, UNET 1.5e-6, TE 1.2e-5, BS4 (4 GPU) grad accu 4, 71% UNET (Speed + must underfit)
Train parameter (2.5EP, 3EP): adamW8bit, UNET 1.5e-5, TE OFF, BS4 (4 GPU) grad accu 4, 100% UNET (finetune to different objective parameters)
75-100+ days for 1EP. Train 1 EP only. Save per 10k steps.
Train result and loss curve: Tensorboard in HF
Core concept: Unsupervised learning
Expectation: MID (100% no filter no quality tag)
or "reality" / "golden mean", proven in Public Arena (ELO 1500). Currently in s3.
Actual: needs TIPO and a non-empty negative prompt.
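The batch arithmetic above can be sanity-checked with a quick sketch. Note the card's "step" appears to count one batch across all four GPUs, before gradient accumulation, and the stated 12.6M (8.0 + 4.6) covers both datasets:

```python
# Sanity-check of the stated batch arithmetic (numbers from the spec above).
gpus = 4
batch_size_per_gpu = 4  # "BS4 (4 GPU)"
images_per_step = gpus * batch_size_per_gpu
print(images_per_step)  # 16, matching "1 step = 16 images"

steps_for_1ep = 778_000
total_images_seen = steps_for_1ep * images_per_step
print(total_images_seen)  # 12,448,000 -- close to the stated 12.6M
```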
How to use
(Reference only, unchanged since 2025) CFG4, Euler, Shift 3.0.
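For reference, the settings above might map onto diffusers roughly as follows. This is a hedged sketch, not a confirmed recipe: the checkpoint filename and prompts are placeholders, and whether "Shift 3.0" corresponds to `FlowMatchEulerDiscreteScheduler(shift=3.0)` for the RF variant is an assumption.

```python
# Hedged sketch: assumed diffusers mapping of "CFG4, Euler, Shift 3.0".
# Filename is a placeholder; the scheduler choice for the RF variant is an
# assumption, not a confirmed configuration.
import torch
from diffusers import StableDiffusionXLPipeline, FlowMatchEulerDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_single_file(
    "astolfo_checkpoint.safetensors", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    pipe.scheduler.config, shift=3.0
)
image = pipe(
    prompt="1boy, astolfo (fate), ...",  # expand with TIPO first
    negative_prompt="lowres",            # keep the negative prompt non-empty
    guidance_scale=4.0,                  # CFG 4
).images[0]
```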
Train LoRA / merge on top of this model. Compatibility should still be close to the 215c base model. Realistic human content is still supported. "Trust me bro". Artist tags may not work, but I did train on them. Just dump your "NAI" prompts here.
Use TIPO to expand tag-based prompts with NLP.
Short tag-only prompts will suffer from background latent noise. Valid tags can be found on e621 or Danbooru.
All images are seen just once. There is no task or KPI to chase, nor has that omniscient state been achieved. The loss curve is so flat that it neither converges nor diverges.
Full documentation will be published; it is as long as the AstolfoMix series.
Description
Switched base model (AK instead of A only).
Stayed with EPS training because of technical difficulties.
Did some math to skip the last checkpoint save, which would OOM (save every 11254 steps instead of 10000).
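One reading of that save-interval change (an assumed interpretation, with illustrative step counts taken from the 1EP figure above): pick an interval so the final scheduled save lands strictly before end-of-training, leaving the OOM-prone last save unscheduled.

```python
# Hedged sketch of the save-interval reasoning (assumed interpretation;
# step counts are illustrative, from the 1EP figure above).
total_steps = 778_000
save_every = 11_254

full_saves = total_steps // save_every    # complete checkpoints written
last_save_step = full_saves * save_every  # the final save fires here
print(full_saves, last_save_step)
assert last_save_step < total_steps       # end-of-run save never triggers
```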