Greg Rutkowski style LoRA for Anima. Trained on preview3. Prefix your prompt with "@greg rutkowski. " Natural-language prompts work best.
All training data and diffusion-pipe training config files are shared in the attached ZIP file. If you just want the config files: training config, training dataset config, "eval" dataset config (used to track a stabilized training loss).
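For orientation, here is a minimal sketch of how the parameters listed below would map into diffusion-pipe-style config keys. Key names follow diffusion-pipe's example configs, and the `[model]` block for Anima is my assumption; the actual files in the ZIP are authoritative.

```toml
# Sketch only -- check the ZIP for the real configs.
output_dir = '/data/training_runs/gregr_lora'   # placeholder path
dataset = 'dataset.toml'
epochs = 40
micro_batch_size_per_gpu = 4   # global batch size 4 on a single GPU

[model]
type = 'anima'                 # assumed identifier
llm_adapter_lr = 0             # don't train the LLM adapter
timestep_sample_method = 'logit_normal'
sigmoid_scale = 1.3

[adapter]
type = 'lora'
rank = 32
dtype = 'bfloat16'

[optimizer]
type = 'adamw'
lr = 2e-5
betas = [0.9, 0.99]
```

The companion dataset.toml would carry the resolution list, e.g. `resolutions = [512, 1024, 1536]`, plus a `[[directory]]` entry pointing at the image folder.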
Captioning script: link.
Dataset construction process:
Select and download images manually from the internet.
Use a Flux 2 Klein 9b ComfyUI workflow to batch-process the whole folder and remove watermarks.
Caption with Gemma 4 31b. If you have less VRAM, JoyCaption or one of the medium-sized Qwens would work almost as well.
Main training parameters:
Rank 32 LoRA
Global batch size 4
Don't train LLM adapter (llm_adapter_lr=0)
AdamW optimizer, 2e-5 LR, betas=[0.9, 0.99]
Mixed-res training on [512, 1024, 1536]
[512, 1024] mixed is always good; 1536 is optional but helps capture the fine details of this particular style.
sigmoid_scale=1.3, optional, but helps with fine details by shifting the timestep distribution toward something slightly more uniform than the standard logit-normal.
This version corresponds to 40 epochs (120 passes over the data when counting all 3 resolutions).
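The sigmoid_scale effect can be sanity-checked numerically. Assuming the parameter multiplies the pre-sigmoid normal draw (t = sigmoid(scale * z), which is how I read it; verify against diffusion-pipe's source), a quick NumPy sketch shows the distribution spreading from the mid-range timesteps toward more uniform coverage:

```python
import numpy as np

def logit_normal_timesteps(n: int, sigmoid_scale: float = 1.0, seed: int = 0) -> np.ndarray:
    """Sample timesteps in (0, 1); assumes sigmoid_scale multiplies the normal draw."""
    z = np.random.default_rng(seed).standard_normal(n)
    return 1.0 / (1.0 + np.exp(-sigmoid_scale * z))

base = logit_normal_timesteps(200_000, sigmoid_scale=1.0)
scaled = logit_normal_timesteps(200_000, sigmoid_scale=1.3)

# Scaling pushes probability mass away from t ~ 0.5 toward the tails,
# i.e. slightly more uniform coverage of very-low/very-high noise levels.
print(f"std  base={base.std():.3f}  scaled={scaled.std():.3f}")
print(f"frac t<0.1 or t>0.9  base={((base < 0.1) | (base > 0.9)).mean():.3f}  "
      f"scaled={((scaled < 0.1) | (scaled > 0.9)).mean():.3f}")
```

A uniform distribution on (0, 1) has standard deviation about 0.289; scaling moves the logit-normal's spread in that direction, which matches the "slightly more uniform" description above.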
Comments (18)
Thanks for sharing some official Lora training params! You guys rock :D
Many thanks for the LoRA. A question: did you caption the dataset image by image with Gemma 4 31B, or did you batch-caption with some software using that LLM? Thanks.
It's a simple Python script that uses vLLM to batch caption a folder. I added a link to it.
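For readers who want the general shape of such a script, here is a minimal sketch. The model id, prompt, and sampling settings are placeholders, not the author's actual script, which is linked above; the vLLM multimodal call is the standard `LLM.generate` with `multi_modal_data`.

```python
# Sketch of a vLLM batch-captioning script; placeholders throughout.
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def collect_images(folder: str) -> list[Path]:
    """Gather image files, sorted so captions pair stably with filenames."""
    return sorted(p for p in Path(folder).iterdir() if p.suffix.lower() in IMAGE_EXTS)

def caption_folder(folder: str, model_id: str = "PLACEHOLDER/vision-llm") -> None:
    # Heavy imports kept local so collect_images works without vLLM installed.
    from PIL import Image
    from vllm import LLM, SamplingParams

    llm = LLM(model=model_id)  # loads once; vLLM batches all requests internally
    params = SamplingParams(temperature=0.2, max_tokens=300)
    images = collect_images(folder)
    requests = [
        {
            # A real script should wrap this in the model's chat template.
            "prompt": "Describe this image in detailed natural language.",
            "multi_modal_data": {"image": Image.open(p).convert("RGB")},
        }
        for p in images
    ]
    for path, out in zip(images, llm.generate(requests, params)):
        # Write one .txt caption next to each image, as trainers expect.
        path.with_suffix(".txt").write_text(out.outputs[0].text.strip())
```

Writing a sidecar `.txt` per image matches the caption layout most diffusion trainers, including diffusion-pipe, read by default.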
@circlestone_labs Many thanks for the script!
King
Awesome! Thanks for the training data, I can't wait to try it asap.
Thanks so much for this <3! Can you also give some guidance on training characters and concepts with Anima? How different would it be from the parameters you chose for the style here?
Wow, this is a top-quality LoRA.
Looking forward to a 'trending on artstation' LoRA :v
Thanks, but at least clean the watermarks out of your dataset.
Also, some images like "image_00010.png" look like traditional oil painting with heavy impasto texture,
while others like "image_0008.png" feel like modern digital illustration with realistic rendering.
In my opinion, these differences could negatively impact the LoRA, so I would avoid training something like that.
Thanks for the recommended settings and dataset, but I wouldn't use the author's version wholesale. Firstly, he trains on diffusion-pipe, which is fully compatible with Linux; you're more likely to have problems on Windows. But even beyond that, my tests showed that disabling LLM-adapter training, which the author does here, leads to dire consequences. I trained using LoRA_Easy_Training_Scripts on Windows; perhaps something breaks when it's disabled, but the results were broken. I recommend at least using the settings in this author's version: https://civitai.com/models/2425904?modelVersionId=2854130
Gemma 4 also does a pretty good job, but it needs caption examples. I think the best option is the way the model was most likely trained: danbooru tags followed by a natural, dry caption that acts almost like glue, holding the danbooru tags together into sentences. Don't forget that it's better not to use shuffle with this format, or to do it correctly; otherwise, in my case, the words got scrambled at the commas. (If you really want shuffle, you can remove all the commas from the natural-caption part; it's not that bad, the model will still understand what's going on.)
i have been running diffusion-pipe in wsl2 under windows for a number of years now. no worries. simple cmdline interface, good tensorboard support for monitoring training progression. you don't need a lot of linux knowledge to set it up and use it.
@tedbiv I should have mentioned that we're talking about pure Windows. A Linux emulator is almost the same as installing Linux directly, but my Windows is apparently broken. WSL refuses to install, and the necessary services won't start. So I actually tried installing it on Windows. (Don't worry, I'll reinstall Windows later anyway.)
@happyhen actually it is installing linux. part of setting up wsl env is installing ubuntu. it just runs in a windows virtual machine instead of natively.
Diffusion-pipe does work on WSL2 in my experience, but something with the pinned memory makes it very annoying to use. If you use blocks_to_swap or unsloth checkpointing it immediately throws an OOM error; I also think that if your VRAM spills over into RAM too much, from something like a high batch size, it'll crash as well. It's way easier to just dual-boot Ubuntu for it.
Can you share the Flux 2 Klein 9b ComfyUI workflow?
Sharing this kind of dataset and config files is very helpful as a reference for production. Thank you.
thanks for posting this. love, love, love diffusion-pipe. just updated to pick up anima support. also loving anima preview 3. thank you for your work.




