This ComfyUI workflow is designed for Z-Image-i2L, also known as Image to LoRA. The main purpose of this workflow is to let creators quickly train a lightweight LoRA from a small group of reference images, save the generated LoRA, and immediately test it inside the same ComfyUI graph with Z-Image Base.
Unlike a traditional LoRA training workflow that requires dataset preparation, caption files, training scripts, optimizer settings, command-line configuration, and manual model loading, this workflow is designed as a fast and practical image-to-LoRA pipeline. The user only needs to provide several reference images, run the i2L generation node, save the LoRA, and then test the newly generated LoRA through a normal Z-Image generation route.
The workflow is built around the RunningHub Z-Image-i2L node system. It uses RunningHub_ZImageI2L_Loader to load the Image-to-LoRA pipeline, RunningHub_ZImageI2L_LoraGenerator to generate a LoRA from the uploaded training images, and RunningHub_ZImageI2L_Saver to save the generated LoRA file. This makes the workflow much more convenient for creators who want to quickly capture a character style, object style, visual identity, costume concept, creature design, or artistic direction from a few images.
The training input section uses multiple LoadImage nodes and ImageBatchMulti. In the uploaded setup, the workflow accepts six image inputs and combines them into one image batch. These images become the training references for the Image-to-LoRA generator. This is useful because a single image may not be enough to define a stable visual concept. Multiple images help the i2L pipeline understand the repeated features across the references, such as face shape, clothing style, color theme, character identity, object design, or general aesthetic.
The ImageBatchMulti node is important because it merges the reference images into one training batch. The workflow is configured with an input count of 6, which means users can provide a small set of images without preparing a full dataset folder manually. This is suitable for quick creator testing, lightweight character adaptation, concept extraction, and rapid LoRA prototyping.
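Conceptually, the merge that ImageBatchMulti performs is just a stack of same-sized images along a new batch axis. A minimal NumPy sketch of that idea (the function name and shapes are illustrative, not ComfyUI's internal API):

```python
import numpy as np

def batch_images(images):
    """Stack same-sized HxWx3 reference images into one (N, H, W, 3) batch.

    In ComfyUI the images would already share a common resolution;
    here we simply require matching shapes.
    """
    base_shape = images[0].shape
    if any(img.shape != base_shape for img in images):
        raise ValueError("all reference images must share the same size")
    return np.stack(images, axis=0)

# Six dummy 512x512 RGB references, standing in for the six LoadImage inputs.
refs = [np.zeros((512, 512, 3), dtype=np.float32) for _ in range(6)]
batch = batch_images(refs)
print(batch.shape)  # (6, 512, 512, 3)
```

The batch dimension is what lets the i2L generator see all six references together rather than one at a time.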
The i2L generation stage uses RunningHub_ZImageI2L_LoraGenerator. This node receives the loaded ZImageI2LPipeline and the batched training images, then generates a LoRA name and LoRA path. In the included setup, the seed is fixed, which helps make the LoRA generation process more reproducible. Users can change the seed if they want a different training result or want to test variation between generated LoRAs.
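Why a fixed seed helps is easy to demonstrate with any seeded RNG: the same seed reproduces the same random draws, which is what makes repeated i2L runs comparable. This is a generic illustration, not the generator's internal sampler:

```python
import numpy as np

def draw(seed, n=4):
    # A fixed seed makes every call with that seed return identical values.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(n)

a = draw(1234)
b = draw(1234)   # same seed -> identical draws
c = draw(5678)   # different seed -> different draws
print(np.allclose(a, b), np.allclose(a, c))
```

Changing the seed in the workflow plays the same role: it varies the stochastic parts of LoRA generation while everything else stays fixed.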
After the LoRA is generated, the workflow uses RunningHub_ZImageI2L_Saver to save the result. The filename prefix is set to zimage_lora, making it easy to identify the generated file. The LoRA path is also previewed through PreviewAny, so users can confirm that the LoRA has been created and passed forward correctly.
The workflow also places the easy cleanGpuUsed node between the i2L pipeline stage and the LoRA generation output. This is useful because training-style nodes and generation nodes can both consume significant GPU memory. Clearing GPU memory after the i2L stage helps keep the later test-generation stage stable, especially in online or cloud environments.
The second half of the workflow is the testing stage. After the LoRA is generated, it is loaded into the Z-Image generation route through LoraLoader. This means users do not need to manually move the LoRA file, restart ComfyUI, or create a separate test workflow. The LoRA can be trained and tested in one continuous graph.
The testing route uses Z-Image Base with z_image_bf16.safetensors as the main diffusion model, qwen_3_4b.safetensors as the text encoder, and ae.safetensors as the VAE. The generated LoRA is applied through LoraLoader with model strength around 1.1 and clip strength around 1.0. These values are useful for testing whether the LoRA strongly affects the output without completely overwhelming the base model.
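How the model strength scales the LoRA's effect can be sketched with the standard low-rank update, where the adapted weight is W' = W + strength * (alpha / r) * (B @ A). This is the common LoRA formulation, offered as a conceptual sketch rather than a dump of LoraLoader's internals:

```python
import numpy as np

def apply_lora(W, A, B, strength=1.1, alpha=16):
    """Merge a low-rank LoRA update into a base weight matrix.

    W: (out, in) base weight; A: (r, in) and B: (out, r) LoRA factors.
    strength plays the role of LoraLoader's model strength.
    """
    r = A.shape[0]
    return W + strength * (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
A = rng.standard_normal((8, 64)) * 0.01   # rank-8 factors
B = rng.standard_normal((64, 8)) * 0.01

W_test = apply_lora(W, A, B, strength=1.1)  # the test route's strength
W_off  = apply_lora(W, A, B, strength=0.0)  # strength 0 leaves W unchanged
print(np.allclose(W_off, W))
```

This is why strength is a safe tuning knob: at 0.0 the base model is untouched, and raising it linearly amplifies the LoRA's contribution.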
The prompt test section uses CLIPTextEncode with a simple Chinese prompt: “一个武士鬼怪正在与恶鬼战斗,” meaning a samurai ghost or monster fighting an evil demon. This kind of prompt is useful for testing whether the generated LoRA can influence character identity, creature design, costume style, visual tone, or image composition in a new generation.
The negative prompt suppresses common visual problems such as yellowed output, green tint, blur, low resolution, low quality, distorted limbs, eerie appearance, ugly results, AI-looking artifacts, noise, grid-like artifacts, JPEG compression artifacts, abnormal limbs, watermark, garbled text, and meaningless characters. This is practical for LoRA testing because newly generated LoRAs can sometimes introduce artifacts, overfitting, or unstable details if the reference images are inconsistent.
The generation canvas is created with EmptySD3LatentImage at 1024 x 1536. This vertical format is suitable for character testing, portrait-style LoRA previews, concept art outputs, Civitai examples, RunningHub demos, and social media cover-style images. Users can adjust the size depending on whether they want portrait, square, or landscape results.
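The empty latent's shape follows from the VAE's 8x spatial downsampling: a 1024 x 1536 canvas maps to a 128 x 192 latent grid. The 16-channel latent is the SD3-family convention that EmptySD3LatentImage uses; treat the exact channel count as an assumption for Z-Image:

```python
import numpy as np

def empty_latent(width, height, batch=1, channels=16, factor=8):
    # ComfyUI-style empty latent: zeros at 1/8 of the pixel resolution.
    if width % factor or height % factor:
        raise ValueError("dimensions should be divisible by the VAE factor")
    return np.zeros((batch, channels, height // factor, width // factor),
                    dtype=np.float32)

latent = empty_latent(1024, 1536)
print(latent.shape)  # (1, 16, 192, 128)
```

This also explains the usual rule of thumb of keeping canvas dimensions divisible by 8 (ideally 64) when you change the size.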
The sampling stage uses KSampler with 50 steps, CFG 4, res_multistep sampler, simple scheduler, and full denoise. This is a relatively strong test route. Because the purpose is to evaluate a newly generated LoRA, using a higher step count can help reveal whether the LoRA is stable, expressive, and compatible with the base model. The output is decoded through VAEDecode and saved through SaveImage.
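The CFG 4 setting controls how far each denoising step is pushed from the unconditional prediction toward the prompt-conditioned one. The standard classifier-free guidance combination is guided = uncond + cfg * (cond - uncond); a generic sketch, not KSampler's actual code:

```python
import numpy as np

def cfg_combine(cond, uncond, cfg=4.0):
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the conditional one by a factor of cfg.
    return uncond + cfg * (cond - uncond)

uncond = np.zeros(3)
cond = np.array([1.0, 2.0, 3.0])
print(cfg_combine(cond, uncond, cfg=4.0))  # [ 4.  8. 12.]
print(cfg_combine(cond, uncond, cfg=1.0))  # cfg=1 returns cond unchanged
```

Moderate CFG values like 4 let the prompt (and therefore the LoRA-conditioned model) steer the image without the oversaturation that very high guidance can cause.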
This workflow is especially useful for rapid LoRA prototyping. Instead of spending a long time building a formal dataset, users can upload a small group of images and quickly generate a LoRA for testing. This is helpful for AI creators who want to test character concepts, style references, object concepts, mascot designs, fantasy creatures, costume sets, or visual branding ideas.
It is also useful for RunningHub online workflow publishing. Many users want to experience LoRA creation without installing a full local training environment. This workflow lowers the barrier by packaging the training and testing process into one graph. Users can upload reference images, generate a LoRA, and immediately see whether it works.
Main features:
- Z-Image-i2L Image to LoRA workflow
- Fast LoRA generation from reference images
- Supports multiple training image inputs
- Uses ImageBatchMulti to combine six reference images
- RunningHub_ZImageI2L_Loader pipeline loading
- RunningHub_ZImageI2L_LoraGenerator LoRA creation
- RunningHub_ZImageI2L_Saver LoRA export
- PreviewAny output for generated LoRA path checking
- GPU cleanup support through easy cleanGpuUsed
- Immediate LoRA testing inside the same workflow
- Z-Image Base test generation route
- Uses z_image_bf16.safetensors
- Uses qwen_3_4b.safetensors text encoder
- Uses ae.safetensors VAE
- LoraLoader test route with adjustable model and clip strength
- 1024 x 1536 vertical test canvas
- KSampler test generation with res_multistep
Recommended use cases:
Fast LoRA training, Image to LoRA testing, character LoRA prototyping, style LoRA creation, object concept extraction, fantasy character adaptation, creature design testing, costume style capture, visual identity transfer, Civitai LoRA preview creation, RunningHub online LoRA workflow publishing, prompt testing with newly generated LoRA, and lightweight AI model customization.
Suggested workflow:
Start by preparing several reference images. Use images that share the same concept. If you are training a character LoRA, the images should show the same character or very similar identity. If you are training a style LoRA, the images should share the same visual style. If the images are too different, the generated LoRA may become unstable or unclear.
Use clean images whenever possible. Avoid heavy watermarks, text overlays, excessive compression, very low resolution, strong motion blur, and unrelated background clutter. The i2L pipeline can work quickly, but the quality of the input images still matters. Better references usually create a more useful LoRA.
Load the reference images into the six LoadImage nodes. The workflow batches them through ImageBatchMulti. You can use fewer or more references if the workflow is adjusted, but the uploaded version is structured around six input images. Six images are enough for a quick test while still giving the LoRA generator more information than a single reference.
Run the RunningHub_ZImageI2L_LoraGenerator section to generate the LoRA. Keep the seed fixed if you want repeatable output. Change the seed if you want to test different LoRA generation results from the same reference images.
Save the LoRA through RunningHub_ZImageI2L_Saver. The saved file uses the prefix zimage_lora. Check the PreviewAny output if you want to confirm the LoRA path. This helps verify that the generated LoRA is properly passed into the testing section.
Use the testing section immediately after LoRA generation. The LoraLoader loads the generated LoRA into Z-Image Base. Start with moderate LoRA strength. If the LoRA effect is too weak, increase the model strength slightly. If the output becomes distorted or overfitted, reduce the strength.
Write a prompt that tests the LoRA clearly. If the LoRA is meant to capture a character, write a prompt that asks for that character in a new scene. If it is meant to capture a style, use a subject that lets the style appear clearly. If it is meant to capture an object, describe that object directly and test whether the generated image keeps the reference features.
Use the negative prompt to suppress LoRA artifacts. Newly generated LoRAs may create color cast, grid artifacts, strange limbs, low-quality details, or unwanted text if the input images contain those issues. Add targeted negative terms if specific problems appear.
Use the 1024 x 1536 vertical canvas for character testing. This format is good for preview images and Civitai examples. For object or landscape testing, adjust the latent size to match the target composition.
Run several test generations. A single image is not enough to judge a LoRA. Test different seeds and prompts. A good LoRA should influence the output consistently without destroying anatomy, composition, or general image quality.
If the LoRA does not capture the concept well, improve the reference set. Use cleaner images, more consistent subject views, better lighting, and fewer unrelated background elements. The fastest way to improve an i2L result is usually to improve the input references.
This workflow is designed for creators who want a fast, practical, and online-friendly Z-Image LoRA creation pipeline. It combines image batching, automatic LoRA generation, LoRA saving, GPU cleanup, LoRA loading, and Z-Image Base test generation into one graph. It is especially useful for quickly turning reference images into a usable LoRA prototype and testing the result without leaving ComfyUI.
🎥 YouTube Video Tutorial
Want to know what this workflow actually does and how to start fast?
This video explains what the tool is, how to launch the workflow instantly, and shares my core design logic — no local setup, no complicated environment.
Everything starts directly on RunningHub, so you can experience it in action first.
👉 YouTube Tutorial: https://youtu.be/wT9ob7rFONM
Before you begin, I recommend watching the video thoroughly — getting the full context helps you understand the tool faster and avoid common detours.
⚙️ RunningHub Workflow
Try the workflow online right now — no installation required.
👉 Workflow: https://www.runninghub.ai/post/2023308170269036545/?inviteCode=rh-v1111
If the results meet your expectations, you can later deploy it locally for customization.
🎁 Fan Benefits: Register to get 1,000 points plus 100 points per daily login, and enjoy RTX 4090-class performance with 48 GB of VRAM!
📺 Bilibili Updates (Mainland China & Asia-Pacific)
If you’re in the Asia-Pacific region, you can watch the video below to see the workflow demonstration and creative breakdown.
📺 Bilibili Video: https://www.bilibili.com/video/BV1qXZMBwEC7/
☕ Support Me on Ko-fi
If you find my content helpful and want to support future creations, you can buy me a coffee ☕.
Every bit of support helps me keep creating — just like a spark that can ignite a blazing flame.
👉 Ko-fi: https://ko-fi.com/aiksk
💼 Business Contact
For collaboration or inquiries, please contact aiksk95 on WeChat.
🎥 YouTube Video Tutorial
Want to know what kind of tool this workflow is and how to launch it quickly?
The video covers the tool's positioning, the quick-start method, and my design logic.
We demonstrate directly on RunningHub, so you can see the actual results right away.
👉 YouTube Tutorial: https://youtu.be/wT9ob7rFONM
Before starting, I recommend watching the video in full: grasping the overall approach helps you get up to speed faster and avoid common detours.
⚙️ Try the Workflow Online
You can try it online right now, no installation required.
👉 Workflow: https://www.runninghub.ai/post/2023308170269036545/?inviteCode=rh-v1111
Open the link above to run the workflow directly and see the results in real time.
If the results meet your expectations, you can also deploy it locally for customization.
🎁 Fan Benefits: Register to get 1,000 points plus 100 points per daily login, and enjoy RTX 4090-class performance with 48 GB of VRAM!
📺 Bilibili Updates (Mainland China & Asia-Pacific)
If you are in Mainland China or the Asia-Pacific region, watch the video below for a live demonstration and design breakdown of this workflow.
📺 Bilibili Video: https://www.bilibili.com/video/BV1qXZMBwEC7/
I will keep updating model resources on Quark Drive (夸克网盘):
👉 https://pan.quark.cn/s/20c6f6f8d87b
These resources are mainly intended for local users, to support creation and learning.
