This workflow, built on the Z-Image model, trains a style LoRA from just a few images in the same style. The idea is similar to Midjourney's sref style codes, but unlike traditional LoRA training, it produces a LoRA in the same or a similar style in a very short time from a small dataset. For important details, see the testing and usage instructions below.
💻 I've already set up the complete ➡️ workflow for you, from training to generating images. Just click the link to use it online and download your trained LoRA.
🎁Bonus: If you're signing up for the first time, you can get 1,000 free RH Coins by using my link and the invite code ➡️rh-v1182. Plus, you'll get another 100 RH Coins for logging in daily.
🚀Workflow Testing and Instructions:
1. I tested this workflow on RunningHub, and the results show it can't learn every style. In my tests, for example, it couldn't replicate motion blur or line-art styles. To try it locally, you can deploy the model and nodes from these links: DiffSynth-Studio/Z-Image-i2L and ComfyUI_RH_ZImageI2L.
2. The images in your training set must all share the same style. You need at least 4 images; my workflow defaults to 8. Given how the Z-Image model works, I recommend images of around 1 megapixel (see the resizing sketch after this list).
3. If the style isn't coming through in your generated images, try adjusting the Model Strength in the LoraLoader node; values between 1.0 and 1.5 usually work best. If the image isn't following your prompt, slightly increase the Clip Strength in the LoraLoader until you get the balance you want (see the second sketch after this list).
4. Once you've finished training a style LoRA, you can download it from the "Task List" on the right (see the screenshot for the exact location).
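
To make the ~1-megapixel recommendation in step 2 concrete, here is a minimal Python sketch that rescales a folder of training images to roughly 1 MP while preserving aspect ratio. It assumes Pillow is installed, and the folder names are hypothetical; this is not part of the workflow itself, just one way to prepare the dataset:

```python
# Minimal sketch: scale training images to roughly 1 megapixel.
# Assumes Pillow is installed (pip install Pillow); folder names are hypothetical.
import math
from pathlib import Path

from PIL import Image

TARGET_PIXELS = 1_000_000  # ~1 MP, per the recommendation in step 2

def resize_to_about_1mp(src_dir: str, dst_dir: str) -> None:
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).iterdir()):
        if path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue
        img = Image.open(path).convert("RGB")
        scale = math.sqrt(TARGET_PIXELS / (img.width * img.height))
        # Snap to multiples of 16, a common convention for latent-diffusion inputs.
        w = max(16, round(img.width * scale / 16) * 16)
        h = max(16, round(img.height * scale / 16) * 16)
        img.resize((w, h), Image.LANCZOS).save(out / f"{path.stem}.png")
        print(f"{path.name}: {img.width}x{img.height} -> {w}x{h}")

if __name__ == "__main__":
    resize_to_about_1mp("raw_style_images", "training_set")
```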
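
For step 3, if you drive ComfyUI through its API rather than the web UI, the two strengths correspond to the strength_model and strength_clip inputs of the LoraLoader node in the API-format workflow JSON. A hedged sketch follows; the node IDs and the LoRA filename are made up for illustration, while the class_type and input names match ComfyUI's built-in LoraLoader:

```python
# Fragment of a ComfyUI API-format workflow (the "prompt" dict) showing the
# LoraLoader node. Node IDs ("4", "10") and the filename are hypothetical.
prompt_fragment = {
    "10": {
        "class_type": "LoraLoader",
        "inputs": {
            "lora_name": "my_style_lora.safetensors",  # hypothetical filename
            "strength_model": 1.2,  # step 3: raise toward 1.5 if the style is weak
            "strength_clip": 1.0,   # nudge up if images ignore the prompt
            "model": ["4", 0],      # MODEL output of an upstream loader node
            "clip": ["4", 1],       # CLIP output of the same node
        },
    }
}
```

Here strength_model scales the LoRA's effect on the diffusion model (how strongly the style is applied), while strength_clip scales its effect on the text encoder, which is why it influences how well the image follows your prompt.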