For business inquiries, commercial licensing, custom models, and consultation, contact me at [email protected]
Try Juggernaut Z on RunDiffusion
How to Use Juggernaut Z in ComfyUI
Hello everyone,
It has been almost exactly one year since the last Juggernaut version (Ragnarok). After some downtime, Tongyi Lab and Black Forest Labs provided us with two excellent community models (Z-Image and FluxKlein) at the end of last year.
I am presenting the first output of this here: Juggernaut Z, a fine-tune of the Z-Image base.
The checkpoints are available in all standard formats (bf16, fp8, and various GGUF versions). If I missed a specific format you need, just let me know in the comments. Also, a quick heads-up: if you already have Z-Image Base running on your setup, there is no need to download the Text Encoder and VAE again; you can simply reuse your existing ones.
I will keep this brief. This is version 1, so naturally, not everything is perfect yet. It will take 2-3 versions to fully integrate everything we have planned. Please be patient, as work is simultaneously underway on the Juggernaut variants for FluxKlein 4B and 9B.
Our main focus for this first release was killing the glossy "AI plastic" look of the base model. You will notice much better micro-contrasts, actual skin pores, and physically accurate lighting. We also improved the structural logic to reduce hallucinated geometry.
A Quick Word on Prompting: Please actually read the Prompt Guide. If you stick to the old SDXL tag-style prompting (just throwing comma-separated keywords at it), you might get a lucky shot here and there, but you will mostly run into errors and broken concepts. Juggernaut Z thrives on semantic prompting: talk to it in natural, descriptive sentences. If you are not used to this yet, I highly recommend using your favorite LLM (GPT, Gemini, Claude, etc.) to help structure your ideas into proper sentences.
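To illustrate the difference (these are just made-up examples of mine, not taken from the Prompt Guide): a tag-style prompt like "photo, woman, red dress, city street, night, bokeh, 8k, masterpiece" will mostly fight the model, while a semantic version of the same idea, "A candid photo of a woman in a flowing red dress crossing a rain-soaked city street at night, neon signs reflecting in the puddles, shallow depth of field.", plays to its strengths.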
I have included a ComfyUI workflow, which should ensure problem-free generation.
I recommend the following settings with the workflow (a rough sketch of the two-pass idea follows the settings list):
Resolution: 960x1440 or something similar at that pixel count. Low resolutions like 1024x1024 will sometimes look too grainy/noisy.
First pass
Steps: 22
Denoise: 1.00
Sampler: Res_2s
Scheduler: Beta
Second pass
Steps: 3
Denoise: 0.15
Sampler: Res_2s
Scheduler: Normal
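To make the numbers above a bit more tangible, here is a tiny Python sketch (my own illustration, assuming ComfyUI's usual KSampler behaviour where a denoise value below 1.0 builds a longer schedule and only runs its tail; the Res_2s sampler itself comes from the RES4LYF node pack):

```python
# Mirror of the settings above; sampler/scheduler names are just shown as strings.
passes = [
    {"steps": 22, "denoise": 1.00, "sampler": "res_2s", "scheduler": "beta"},
    {"steps": 3,  "denoise": 0.15, "sampler": "res_2s", "scheduler": "normal"},
]
for i, p in enumerate(passes, 1):
    full = round(p["steps"] / p["denoise"])  # implied length of the full schedule
    print(f"pass {i}: runs the last {p['steps']} of ~{full} schedule steps "
          f"({p['sampler']}/{p['scheduler']})")
```

In other words, the second pass is only a light polish of the first-pass latent, which is why it barely adds to the total generation time.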
The foundational workflow was created by nsfwVariant. It was only minimally modified for our purposes, so credit goes to him. Please show him some support.
You can also find a couple of comparison shots in my profile and on the RunDiffusion Juggernaut Z page mentioned above :)
Any LoRA trained on the Z-Image Base is fully compatible with Juggernaut Z. At least, every single one I’ve tested so far worked flawlessly.
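If you want to double-check a LoRA before loading it, a quick way (just a sketch of mine, not an official tool) is to peek at the tensor names inside the file; a LoRA trained on Z-Image Base should target the same transformer modules that Juggernaut Z inherits from the base. The path below is a placeholder:

```python
from safetensors import safe_open

def lora_targets(path: str) -> list[str]:
    """List the module names a LoRA file patches."""
    with safe_open(path, framework="pt", device="cpu") as f:
        # Most trainers store weights as "<module>.lora_up/.lora_down" or
        # "<module>.lora_A/.lora_B"; strip the suffix to get the module names.
        return sorted({k.split(".lora_")[0] for k in f.keys() if ".lora_" in k})

print(lora_targets("my_zimage_lora.safetensors")[:10])  # hypothetical file name
```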
Finally, have fun generating. We will be back soon with a FluxKlein fine-tune and Version 2 of Juggernaut Z.
Comments (62)
FP16
Added :)
@KandooAI One more important question: does it still give decent results with Euler + Beta or Simple? Is it low-res capable? Are the turbo LoRAs (6-8 steps) fine with this model? Did we lose any of the vanilla model's capabilities in any way?
@amazingbeauty Euler works for sure, but I wouldn't really recommend it because it tends to smooth out and flatten the details. I’m using Beta in my own ComfyUI workflow as well, so that’s definitely fine and runs without any issues.
Regarding resolution: It can do 1024x1024, but I personally wouldn’t go that route. Lower resolutions often add some grain or noise that doesn't really look good. For Z-Image Base and Juggernaut-Z, I’d actually suggest going higher, somewhere in the 1.5k range, to get the best results.
The distilled 4-8 step LoRAs should work perfectly fine. Basically, any LoRA trained on Z-Image Base should run on Jugg-Z without problems. And about the vanilla capabilities: No, we didn't lose anything. The model still follows prompts just like the base, we just focused on making the textures and lighting better. It's basically an upgrade, not a trade-off.
Thank you for the model—I’ve been eagerly waiting to try it! The results with my old prompts are excellent, and it follows instructions very well. I wish you continued success in your work!
Thank you so much for the kind words! Hearing feedback like this really means a lot to me and shows that continuing to release these versions is exactly the right path. V1 is really just the start: Next up is the FluxKlein fine-tune, and then I'm already moving on to Juggernaut Z V2! :)
Just a quick heads-up: if you ever run into any hiccups with your older prompts down the line, definitely check out the linked Prompt Guide. Have fun generating!
Thank you for making and sharing this. I tried with the default workflow using BF16, and then FP8, and changed no settings or prompts at all. In both cases I only got a gray blob. I had all components already installed (latest RES4LYF etc). Happy to post if you need to see, just don't want to wreck your gallery. Has anyone else experienced this?
I actually ran into similar issues right after training when I started testing. The good news is: it's not a model problem, but a setup issue.
Back then, I just updated ComfyUI to the latest version and disabled Sage Attention, and that fixed it immediately. Alternatively, double-check your VAE (just make sure you are using the specific VAE from the Z-Image Base). Let me know if that works for you!
We had this issue when we started training; make sure you update everything. And disable Sage Attention! :)
@KandooAI Thank you for the tips. Sadly I'm still in the same place. I have the latest Comfy nightly as of an hour ago. I do have Sage Attention installed in my venv but not activated globally, and I'm using the exact workflow you posted (no changes whatsoever), so Sage is not being activated in that particular workflow.
My comfy startup command for reference:
python main.py --disable-smart-memory
I can try re-enabling smart memory, but that should not matter in theory.
I am using the exact ae.safetensors from Zimage.
I'm befuddled by this....
I'm on Python 3.13.11, torch 2.13.0.dev20260425+cu130. I have dozens of other working Z-Image, Klein 9B, Ernie, LTX, etc. workflows with this same config, so I'm not sure what's going on. I agree with you that it's not a model issue, as I see you and others have generated successfully. Hopefully someone can identify what else to check/change.
Quick update to note that re-enabling smart memory did not change the output (but did significantly slow down the process!). Still gray blobs with no command-line arguments when launching Comfy.
@EnragedAntelope It's difficult to diagnose this from afar. The only differences I can spot right away are your startup command --disable-smart-memory (which I don't use) and our environments. For reference, I am running Python 3.12.13 with PyTorch 2.6.0+cu124.
@KandooAI Thank you again. It took a bit of trial and error, but I am now able to get output after removing https://github.com/WASasquatch/RES4SHO. I also had RES4LYF nodes, so maybe there was a conflict, but I notice RES4SHO modifies the Beta scheduler, so that may have been the root cause.
Thanks again for the model, and for trying to help troubleshoot!
@EnragedAntelope Please share some of your images you create :)
diffusers
does it do nsfw?
Yes, it can absolutely do NSFW (Female). The key is to prompt semantically, traditional tag-style prompting won't work very well here. That being said, NSFW was not the main focus for V1, which is why I didn't flag it as an NSFW model. It will become more stable and cover a wider range in upcoming versions (like JuggernautXL). In the meantime, you can just use any of the existing NSFW LoRAs for the Z-Image base. They should work without any issues.
Got it. Thanks for replying. Looking forward to coming versions.
Is this checkpoint compatible with the alibaba-pai/Z-Image-Fun-Lora-Distill LORAs?
Yes, that should work fine :) Every LoRA trained on Z-Image Base should work fine with Jugg-Z.
@KandooAI Do I need to use your specific Text Encoder and VAE, or can I use the default ones from ComfyUI's official Z-Image workflow?
@AnimaXx You can use the normal VAE and Text Encoder that comes with Z-Image Base :)
Does this work on WebUI Forge Neo, etc.? I'm getting ghost/spirit, blurry, smeared, etc. type photos.
I personally don't use Forge Neo, so I can't really help here. But I found a Reddit post from a while ago where someone had problems with Z-Image Base and Forge Neo and found a solution for himself. That solution might work for Juggernaut-Z on Forge too. At least worth a try.
Another user said they were able to use the GGUF version on Forge Neo
@KandooAI Hi, that's just on Z-Image Base, because the base gives you fewer details, so the extra stuff is just to get the photos generated. DPM++ 2M doesn't work on Neo for me. I'll try it again next time around.
@Colorblind_Adam Some GGUFs work for me and some don't. I think it's because the bits of the GGUF and the VAE don't match up, etc.
It works like any Z-Image Base checkpoint. Your problem is most likely connected to wrong settings.
Check:
Sampler/Scheduler: Euler/Simple (or Beta/SGM_uniform; others don't work)
Shift: 3.5 (Neo has 6 by default, which is too large for Z-Image Base; see the sketch below)
Steps: >30, 50 for good results.
Also, Z-Image Base by design gives better results and quality with NLP prompting (use any LLM you like to improve your prompt; don't rely on the integrated one). One user here suggests a tool for prompting, but you can simply copy-paste a template into any chatbot you like (once is enough; in the next messages you just send the AI the prompts you need improved).
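To give a feel for why the Shift value matters (my own sketch; the time-shift formula below is the one commonly used for flow-matching models such as SD3/Flux, and I'm assuming Z-Image Base behaves the same way):

```python
def shift_t(t: float, shift: float) -> float:
    # Common flow-matching timestep remap: a larger shift pushes the schedule
    # toward the high-noise region.
    return shift * t / (1 + (shift - 1) * t)

for t in (0.25, 0.50, 0.75):
    print(f"t={t:.2f}  shift 3.5 -> {shift_t(t, 3.5):.2f}   shift 6 -> {shift_t(t, 6.0):.2f}")
```

With shift 6, mid-range timesteps get pushed noticeably closer to the high-noise end than with 3.5, which can easily throw off a model tuned for the lower value.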
@mphobbit I got it to work. I swear I saw it designated as a Z-Image Turbo model when it first released. But either way, I was already using the settings you mentioned all along. It worked after I increased the CFG and put in the negative prompt provided.
this model doesn't work with forge neo. 😢
oh...no....
Use the GGUF; that one works. Just tested.
@Dlzeze Thank you for this information, we'll add it.
Don't know why; it works like any ZIB checkpoint. I use FP8.
I've checked the FP8, FP16 and BF16 versions in Forge Neo and all work on my end.
If it generates a black image for you make sure you disable Sage Attention in Forge Neo settings.
Just unchecking the "Use SageAttention" checkbox in the settings is not enough. Make sure to type "--disable-sage" in the Extra Launch Arguments line at the bottom if you use Stability Matrix, and then re-launch it.
Btw, this is true for all Z Image Base models, not just this specific fine tune. Just keep it in mind if you switch between different models a lot.
Thank you for this - the performance with some ZiB character LoRAs and LoKRs I trained myself is incredible - it reached levels of likeness I had never seen before - great job and looking forward to v2, v3, etc. (hopefully with a bit better NSFW ;-) - keep up the great work!
Thanks a lot for the kind words! :) I only did some basic testing with LoRAs before the release (it’s impossible to test everything, haha), so I’m really glad to hear your trained LoRAs are working that well. During my own tests, I also had the feeling that LoRAs perform exceptionally well, but since I hadn't tested it extensively enough, I didn't want to make it a big point in the description yet. ^^
For Version 1, getting the basics right was more important to us than NSFW. We wanted to improve the core fundamentals first—especially the lighting—to have a solid foundation for future versions. But yeah, I’ll definitely be working on that in the upcoming versions to get the body anatomy up to the level of Juggernaut XL. It’s definitely on the to-do list! :)
It works well with ZIB character LoRAs and the 4-step distilled LoRA. I’d like to know how to use this model to train a character with AI Toolkit on RunPod.
You need the Diffusers format for it. Gonna upload it next week :)
If you don't wanna wait: somebody posted this Experimental Fork of AI Toolkit with Juggernaut-Z Support.
I wanted to add to my previous post regarding this model. Honestly, I never doubted that your development process and the final result would be a guarantee of quality. It handles different styles perfectly without LoRAs and understands characters very well. Issues with limbs or overall composition are rare.
I can say that I haven’t really used a base model before because generating images on 8GB of VRAM wasn’t fast enough, but with this one, I’m happy to wait. I did try using a LoRA at 8 steps, but it ruined the character’s hair, making it look dirty and wet.
Thanks again for this wonderful model. I originally started working with your SDXL v6 model back in the day, and now I’ve returned, this time to ZIB.
Thanks for the amazing feedback, really appreciate it! ;) I've seen your images in the gallery and it's always great to see what users are creating with the model. Also, it’s awesome that I could pull someone from the "old" SDXL days over to Z-Image. More community support is exactly what Z-Image and Flux Klein need right now.
And yeah, using those 4-8 step LoRAs definitely kills the quality, it’s a classic tradeoff. I’m planning to do a Turbo version for Jugg-Z eventually, but that’ll probably take a while.
With Version 1, the goal was to make sure it produces fewer errors than the Z-Image base, so the wait time actually feels worth it and you don’t have to keep rerolling all the time.
@KandooAI Looking forward to your turbo version!
I highly recommend this model; it's very versatile and one of the best Z-Image Base models I've seen.
Thank you very much for your kind words and your awesome images :)
The beginning of a new era. Looking forward to trying this one!
Any chance to see nvfp4 version?
I will take a look into it next week :)
Which version do I download? bf16 or fp16? Which gives the best results?
If you are not using it to train LoRAs, fp16 will do.
If your graphics card has less than 16GB of memory, fp8 is more suitable.
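For a rough sense of scale (my own back-of-the-envelope; I'm assuming the diffusion transformer is in the ballpark of 6B parameters, so treat the exact numbers as estimates):

```python
params = 6e9  # assumed parameter count, not an official figure
for name, bytes_per_param in [("bf16 / fp16", 2), ("fp8", 1)]:
    gib = params * bytes_per_param / 1024**3
    print(f"{name}: ~{gib:.1f} GiB of weights (before text encoder, VAE and activations)")
```

That is roughly why fp8 becomes the safer choice once you drop below 16GB.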
This is a very good Zib model; those using a double sampling workflow shouldn't miss it!
Yours is the best ZiB model I've found. Please consider making a penis LoRA to go with your checkpoint, because I haven't found one good penis LoRA and I've tried them all.
As an obvious man of culture I know one of your focal points must be good skin. Any chance you got any tips to get that with this model?
Here are all the penis LoRAs for Z-Image Base:
https://civitai.red/search/models?baseModel=ZImageBase&sortBy=models_v9&query=Penis
Isn't there a good one? I didn't test them, to be honest.
@KandooAI I've tried all of them and they aren't good. You've always done great work on your Juggernaut models so if anyone could create a good penis lora that works well, it would be you.
@fanboytunes0789 KandooAI would be a better person to ask than me. I wish I could help you.
@Bobit No worries. :) Thanks!
Question:
Hey everyone,
I’ve spent the last few days grinding on a high-speed version of Juggernaut-Z, and I think we’ve finally landed on a solid checkpoint. The goal was maximum efficiency without losing the Juggernaut soul.
You can check out the first batch of sample images here: https://civitai.red/posts/28552509
Technical Specs:
Steps: 5 (Optimized for a 3-6 step range)
CFG: 1.0 (Works reliably between 1.0 and 2.0)
This version is designed specifically for rapid iterations and high-speed workflows. Naturally, there is a minor trade-off in fine detail compared to the full-step model—as was expected—but for the speed you're getting, it definitely hits the mark.
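For scale: 5 steps versus the 22 + 3 steps of the standard two-pass workflow works out to (22 + 3) / 5 = 5x fewer sampler calls per image.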
I wanted to gauge your interest: Does this quality meet your standards for a turbo release? Should I polish this up and upload it for you in the coming days?
Let me know what you think!
Thanks for all the effort you put into training this! It's a huge contribution to the community. The results are definitely hitting the mark for a first iteration of a turbo model.
If you're looking for areas to tweak for the next update, I noticed the backgrounds sometimes lack a bit of fine detail, giving them a slightly 'rendered' or plastic look (like in the brick road photo and the photo of the Black person). Some images also come out a little high on the contrast and saturation side.
It’s already looking pretty good though, and I really appreciate you making this. Looking forward to what’s next!
Details
Files
juggernautZ_v10ByRundiffusion.safetensors (checkpoint, bf16 / fp8)
juggernautZ_v10ByRundiffusion.gguf (checkpoint, GGUF quantizations)
juggernautZ_v10ByRundiffusion_txt.safetensors (Text Encoder; mirrored as qwen_3_4b.safetensors)
ae.safetensors (VAE, from Z-Image Base)