The most useful GGUF versions (plus the FP8 version)
EDIT:
*GGUF workflow, since I can't upload the .json here: https://www.kombitz.com/2025/04/17/simple-comfyui-workflow-for-hidream-i1-gguf/
*Missing nodes? Run ComfyUI's update\update_comfyui.bat
*quality comparison here: https://artificialanalysis.ai/text-to-image/model-family/hidream#quality
Lighter versions of full HiDream, quantized by city96, to download and use locally. Nodes here:
https://github.com/city96/ComfyUI-GGUF
VAE and text encoders here:
https://huggingface.co/Comfy-Org/HiDream-I1_ComfyUI/tree/main/split_files
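As a quick sketch of where the downloads above go, assuming a stock ComfyUI install at ~/ComfyUI (adjust the path to your own; the folder convention follows the ComfyUI-GGUF README, and the commented filenames are only examples):

```shell
# Assumed ComfyUI location; change this to match your install.
COMFY="$HOME/ComfyUI"

# GGUF unet files load from models/unet, text encoders from models/clip,
# and the VAE from models/vae.
mkdir -p "$COMFY/models/unet" "$COMFY/models/clip" "$COMFY/models/vae"

# Hypothetical filenames; pick the quant that fits your VRAM:
# mv hidream-i1-full-Q4_K_M.gguf "$COMFY/models/unet/"
# mv t5xxl_fp8_e4m3fn.safetensors "$COMFY/models/clip/"
# mv ae.safetensors "$COMFY/models/vae/"
```

After placing the files, restart ComfyUI so the loader nodes pick them up.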
Comments (19)
No hate to this model specifically, but I just came to the conclusion that I don't like HiDream.
After spending a few days completely wiping all traces of Python on my PC just to get a clean start, I reinstall everything: CUDA, Python, Comfy, the works. I copy every setting that is provided, and after wasting all this time and all that space, I get a corrupt image in return. Not to mention it took 3 minutes to get that corrupt image, jesus christ!
Then I decide to try a smaller model, the DEV Q3 one, just to get some speed and perhaps have better luck with the LCM sampler, but nope: I get a solid pink picture in return.
Now I'm done with HiDream, I'm going back to SDXL where it takes 20 seconds to generate a perfectly fine picture, thank you.
well the rest of us don't get pink pictures so... don't blame hidream
@andux I'd rather blame HiDream because if I blame myself I'm only confirming that I'm stupid, and that's pretty stupid :P
Like, I don't know what else I can do.
I have the latest GPU drivers, a working Python install, the latest CUDA, and the latest ComfyUI update; all other (non-HiDream) models work, and I've followed the workflow settings and the VAE/text encoder download instructions to the letter.
The only thing left is to reinstall Windows but I'm not doing that, yet.
@punkbuzter340 Search and GPT the error, dunno; if we wanted it easy we'd install an app and pay
@andux Yeah, another day perhaps.
But ey, I gave your model-card a thumbs up anyway. Even if it didn't work for me it was pretty damn good in my mind.
Just started using SwarmUI and I'm really impressed: it has native HiDream support and combines the user-friendliness of Automatic1111 with the power of ComfyUI. You can actually open the ComfyUI workflow for any image you're generating right from the generator tab, so you see exactly what's happening under the hood and learn ComfyUI as you go, without losing productivity. The install process was smooth for me; no special sysadmin tricks needed (I just symlinked my model folders, but that's optional and not a SwarmUI issue). Highly recommend it for anyone looking for a flexible, all-in-one Stable Diffusion interface!
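For reference, the symlink trick mentioned above can look like this; the paths are hypothetical, and SwarmUI's model root is configurable, so treat this as a sketch of sharing one model folder between UIs rather than duplicating files:

```shell
# Hypothetical paths: point SwarmUI's model folder at an existing shared one.
mkdir -p "$HOME/SwarmUI/Models"
ln -sfn "$HOME/shared-models/Stable-Diffusion" "$HOME/SwarmUI/Models/Stable-Diffusion"
# On Windows, the rough equivalent is:  mklink /D <link> <target>
```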
@Epidural I'm not a complete noob :)
I've been working with Comfy for a long time, I even wrote a 1-click-installer script myself for the non-portable version including all the nodes I use as default as an ease-of-use way of reinstalling, that's how much I've used Comfy :P
The problem is that sometimes, when the planets don't align, a bit of code gets in a twist and things just won't work. Probably something is incompatible with some version of something else. Perhaps my non-portable install is too up to date, since I pull the latest code the way I install; I don't know how often ComfyAnon repacks his zipped package or how often other people get their updates, so node developers may well target that packaged version and not update until the next repack.
From my observation, though, people seem to have more problems with the portable version than with the non-portable one, and I wouldn't trade their issues for my own. From what I can tell, HiDream is pretty slow, I still enjoy the speed of SDXL, and it hasn't been fully uncensored yet, so not a big loss ;D
I got it working. It takes me 8 mins on a 12GB card for a 1024px^2 picture. It follows instructions the best, but it's so slow it's probably easier to make a lora on something else.
@MagicalErotica just give some prompts and start a movie, that's what i do
@andux Okay, but honestly, flux gets nice pictures as well, and I can train a lora with my old gpu.
@MagicalErotica From what I've heard, people say SDXL is more creative than Flux, especially for ethnicity, faces and expressions, and from personal experience I think 1.5 is even more creative.
I used to generate txt2img with 1.5 and then run a few img2img batches with SDXL for the quality. It takes me 10 seconds to generate an image with XL, and that's on a 3060.
I don't make any money from these images, and I bet most people don't either, so I don't see any point or reason to waste all that time on Flux; 99.9% of all the images go to the trashbin anyway. \o/
I mean, if I wanted to be efficient with Flux I'd do what I used to do with 1.5: use SDXL for txt2img and run an img2img batch with Flux. But again, is it worth the time?
@MagicalErotica The Flux license is the problem, if you want the pictures to be yours and to do what you want with them
@punkbuzter340 I've kept all the images I've generated since 2023. They occupy about as much disk space as this model itself, and 24 terabytes is under 300 USD now.
I've done a lot of SDXL, and I think Flux is better, even with the time spent. Denoising on Flux is excellent, so if you want creative images, you can use something else to create the image and then denoise with Flux. HiDream doesn't denoise well.
The only real argument is licensing. If HiDream isn't MIT or Apache 2.0, then it's worse than Flux imo. Besides, Chroma is here.
@andux Is hidream proprietary?
@MagicalErotica For speed use DEV https://civitai.com/models/1534280?modelVersionId=1736045. And HiDream is open, with a permissive license; that's why it's different
Waiting for a smaller model; like many other people, I have only 12 GB. A good solution would be to split the next version into 3 smaller models: Anime, Graphics and Animation, Realism.
Just use SDXL. It's way faster, and uncensored. Especially with all the improved text-encoders by Felldude.
This looks promising. Which version would work best with Forge/A1111 and NSFW?
