Source: https://github.com/hykilpikonna/HiDream-I1-nf4 from hykilpikonna
GGUF versions: CLICK ME!
Installation guide from razzz here (Also a better location to discuss issues and errors, thank you!)
Installation tutorial from AiSearch with flash attention!
This is a reupload! I am not the original author! Have fun with the less VRAM-hungry NF4 version!
You also need at least 16GB VRAM to run any of the models!
💪Train your own model: https://runpod.io?ref=gased9mt
🍺 Join my discord: https://discord.com/invite/pAz4Bt3rqb
Comments
Thanks!!
Do I have to download Meta Llama 3.1 if I run this locally?
There's a ComfyUI workflow.
Is there any way I can run this on a 6GB card? If not, how long will I have to wait?
No, you can't run this on 6GB. You need at least 16GB.
@RalFinger any idea when they might drop a more budget friendly version of the model?
@SLACK69 Let's wait for the GGUF versions, though even then only the Q8 quant will be on par with these NF4 versions.
Read the model description
That is a valid question that "Read the model description" does not answer
@fmod you need the loader, which is the first link below the original source. How hard is that? There is also an example workflow.
I assume you are referring to the "sampler". That is loading the model from Hugging Face; it is not currently using the safetensors, if I am not mistaken. If I am mistaken, please tell me which folder the safetensor files should be placed in so that they are loaded by the example workflow.
@fmod I would also like to know that. I downloaded the model and can't use it, because the node downloads a different model from Hugging Face instead.
@miki1882 According to the discussion on the github page (https://github.com/lum3on/comfyui_HiDream-Sampler/issues/40) regarding the safetensor files: "Unfortunately, there is no model input option yet to connect them with the model loader node." So, it seems that at the moment those nodes are not compatible with the safetensor files.
@fmod After I ran "pip3 install --upgrade --force-reinstall auto-gptq datasets accelerate optimum bitsandbytes" I could see NF4 options in the selection. This doesn't let you use the model you downloaded here, but it's still the NF4 version.
@miki1882 they are working on local model support for the comfy node.
@miki1882 and where should it be launched, in what folder?
I hope you have something for 8GB VRAM.
Sadly, nothing yet.
So I had a bunch of trouble getting the Comfy UI custom node working in its current state. What worked for me:
1. Use Python 3.11.9
2. Install Torch 2.6 + CUDA 12.6
3. If using Windows, install triton-windows
If you are struggling to make it work I hope this helps.
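If you want to script that sanity check, here is a minimal sketch (the function name and messages are mine, not part of the node) that flags deviations from the known-good combination above, namely Python 3.11 and Torch 2.6 built against CUDA 12.6:

```python
import sys

def check_env(py_version=sys.version_info, torch_version=""):
    """Return a list of warnings when the environment differs from the
    combination the comment above reports working. Purely illustrative."""
    problems = []
    if tuple(py_version[:2]) != (3, 11):
        problems.append("use Python 3.11.x (e.g. 3.11.9)")
    if not torch_version.startswith("2.6"):
        problems.append("install torch 2.6 (the cu126 build)")
    return problems
```

If torch is installed, you would pass `torch.__version__` as `torch_version`.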
This is still downloading for me, but while I'm waiting: I just followed the tutorial by AI Search exactly, but in the end he used a workflow (different from the one they have on the page now) and the "fast" model to generate an image. He said it had to download for a while, around 30 minutes. I wanted to use the dev NF4 version and therefore chose "dev" in the HiDream sampler, hoping it would somehow use the NF4 version, but of course it downloaded for 2 hours and it was the non-quantized model. The workflow initially has the dev-nf4 model pre-selected, but if I run it, it doesn't download it; it just gives me an error saying it's not in the "list" or whatever. Trying to download the NF4 model from Hugging Face was confusing because it's this big repository where I didn't know what I needed to download. Perhaps the whole thing, but that seems weird because I was expecting a single safetensors file, and even if I just cloned the whole thing, I wouldn't know where to put it so that it shows up in the model_type input on the HiDream sampler node.
Sorry for my incompetence; I know absolutely nothing and am very dependent on exact instructions. If anyone can decipher my problem from my ramblings, or knows that it's just gonna show up once I have downloaded this checkpoint, I would appreciate your answer.
Update: it does not show up, at least not if I put it in the checkpoints folder
Try the following (https://github.com/lum3on/comfyui_HiDream-Sampler):
git pull
pip install -r requirements.txt
@RalFinger thank you, sadly that didn't quite solve my problem...
I now did both of those things in the HiDream-Sampler folder and they did do something, but I still can't choose any other model. And in general I don't know where in the workflow to put the checkpoint.
This is the error I'm talking about:
Failed to validate prompt for output 17:
* HiDreamSampler 7:
- Value not in list: model_type: 'dev-nf4' not in ['full', 'dev', 'fast']
dev-nf4 is pre-selected; I could change it, but so far that has only downloaded the full models.
It loads up saying "NF4: False, Requires BNB: True, Requires GPTQ deps: False" if that means anything.
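That startup line suggests the node probes at load time which optional quantization backends are importable. A hypothetical sketch of such a probe (the function and key names are mine, not the node's actual code):

```python
import importlib.util

def probe_backends():
    """Report which optional quantization dependencies can be imported.
    NF4 loading via bitsandbytes only works when bnb is present, which
    would explain a startup line like "NF4: False, Requires BNB: True"."""
    has_bnb = importlib.util.find_spec("bitsandbytes") is not None
    has_gptq = importlib.util.find_spec("auto_gptq") is not None
    return {"bnb": has_bnb, "gptq_deps": has_gptq, "nf4_possible": has_bnb}
```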
@Radyschen Sorry to hear that. Would you please open a new issue on the GitHub page? I guess you will have more luck there compared to the comment section on this model for now. I would appreciate a reply if you figure out your issue. You can also join my Discord; the author of the comfy node is also part of the community (it's a German/English community). Good luck!
@RalFinger Oh, I'm German too, but for the sake of the conversation I will keep it in English. I would love to join the Discord. Just one more thing: how does using the NF4 models work for you? Do you select it in the node?
@Radyschen I didn't even test it yet, but others on the Discord did; just hop on and take it from there. Would be glad to see you there ☺
@RalFinger Should I go to the custom nodes folder, open CMD, and then pip install? If so, I'm getting an error, and I'm not sure why there's an "a" folder in the path; I do not have one... Standalone ComfyUI install.
I'm also only seeing the non-NF4 models in the node. While loading the workflow, the node has a "cached" nf4 ref, but after trying to run it only complains about the missing entry.
D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_HiDream-Sampler>pip install -r requirements.txt
Fatal error in launcher: Unable to create process using '"D:\a\ComfyUI\python_embeded\python.exe" "D:\ComfyUI_windows_portable\python_embeded\Scripts\pip.exe" install -r requirements.txt': The system cannot find the file specified.
@aaltomar381 just clone it again with this command: git clone https://github.com/lum3on/comfyui_HiDream-Sampler ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_HiDream-Sampler and then the requirements.txt should be there
Here's what I changed in the hidreamsampler Python file after cloning the folder with the safetensor file, config.json, etc. from Hugging Face:
hidreamsampler.py
fast_nf4_name = "F:/Downloads/Apps/Stable/common/models/HiDream-I1-Fast-nf4"
"fast-nf4": {
"path": fast_nf4_name,
"guidance_scale": 0.0, "num_inference_steps": 16, "shift": 3.0,
"scheduler_class": "FlashFlowMatchEulerDiscreteScheduler",
"is_nf4": True, "is_fp8": False, "requires_bnb": False, "requires_gptq_deps": True
},
Then you just select fast-nf4 in the ComfyUI workflow to use whatever you downloaded.
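Pieced together, the edit above amounts to registering one extra entry in the sampler's model-config table. A self-contained sketch of the idea; the dict name `MODEL_CONFIGS` and the lookup function are my guesses at the node's structure, and the path is the commenter's local folder:

```python
# Only the "fast-nf4" entry is from the comment above; it points at a
# locally downloaded copy of the model instead of a Hugging Face repo id.
fast_nf4_name = "F:/Downloads/Apps/Stable/common/models/HiDream-I1-Fast-nf4"

MODEL_CONFIGS = {
    "fast-nf4": {
        "path": fast_nf4_name,
        "guidance_scale": 0.0,
        "num_inference_steps": 16,
        "shift": 3.0,
        "scheduler_class": "FlashFlowMatchEulerDiscreteScheduler",
        "is_nf4": True,
        "is_fp8": False,
        "requires_bnb": False,
        "requires_gptq_deps": True,
    },
}

def get_config(model_type: str) -> dict:
    """Mirrors the 'Value not in list' validation seen in the error earlier
    in this thread: unknown keys are rejected before anything downloads."""
    if model_type not in MODEL_CONFIGS:
        raise ValueError(f"'{model_type}' not in {sorted(MODEL_CONFIGS)}")
    return MODEL_CONFIGS[model_type]
```

This is why adding the entry makes the option appear in the dropdown: the node builds its choices from the config dict's keys.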
Let's see some NSFW images from this model without any LoRAs.
Works fine for me
Apparently HiDream is even better than Flux Pro (the paid version) and is uncensored from the beginning. It's going to be interesting to see whether it overcomes Flux, the king of images for a long time!
According to the leaderboard and benchmarks, it is not better than Flux Pro, but close.
@bobby888 Hmm in the Arena it shows that it's higher
https://artificialanalysis.ai/text-to-image/arena?tab=Leaderboard
Full detail
https://artificialanalysis.ai/text-to-image/model-family/hidream
It will be better once it runs on 8GB VRAM; currently Flux is better because it can easily do that.
HiDream isn't distilled, so if it's easy to train (let's hope it is), it will probably take over Flux's throne.
People actually pay for models? Also, what makes Flux Pro worth paying for compared to just Flux in general? Is it just more training data?
@MegaMitsu It has better results than the Dev version, allows commercial usage, and can only be run on their online platform (good for people without good hardware for local generation). The Dev version has access to LoRAs from the community, which drastically increase the quality.
@Catz Oh wow. I have not seen that it overtook it. Thanks for the info.
Just upgraded to Blackwell and it looks like it's not supported yet... damn.
@ChronoKnight is looking at that right now. We also talk about that on my discord if you want to join.
@RalFinger I won't use the NF4 though. Maybe later, but right now I'm just doing a regular new Comfy install so I don't mess with my Triton and SageAttention Comfy setup.
@RalFinger I would love to see an FP4 model, but my understanding is that FP4 support isn't out anywhere yet.
I made a helpful guide for those interested.
https://civitai.com/articles/13536
How do I use this safetensor file?
for everyone who is lost - it goes in the CHECKPOINTS folder
You mean I need to put the model in the checkpoints folder?
That's right - it goes into the square hole
As for NSFW content, its knowledge of anatomy is better than Flux's, but it's still difficult to get good results consistently.
I'm not quite clear on how you use it from the checkpoints folder if, by default, the install instructions have it download automatically. Can you clear that up for me please?
Hey totes, the comfy node also downloads the models directly from HF, so you don't need to download the NF4 models for now. There will be a loader (we hope directly from ComfyUI) soon.
@RalFinger Thanks tons.
I am actively downloading the models and modifying hidreamsampler.py to NOT auto-download, because ComfyUI keeps force-closing during the 'git clone https://huggingface...' operation. I want to see what happens if I skip this step.
@steezo_jones If you don't have directions on how to do that, I'd completely understand, but do you have instructions on where to get that sort of know-how? (I'd understand if not, too.) Smart move, by the way.
@totes I didn't have git lfs within my ComfyUI venv; for some reason I was getting no information about why ComfyUI would just return to the prompt. I am reading stuff on Reddit, watching YouTube videos, and using ChatGPT to learn everything I can about AI image, video, and audio generation. Not sure why. I don't get paid for it.
@steezo_jones Sounds good. I just thought there might be some info on modding files in those particular locations, but hey, we do what we can. I'm in the same boat. Thanks for replying!
@totes Hey, did you get HiDream working yet? "Benji's AI Playground" channel on youtube just posted a video explaining how to get this running from the .safetensors files within ComfyUI.
@steezo_jones Oh yeah, I got it working thanks to various YouTube videos. Really, thanks for the tip!
Will this work on Mac?
I don't know, did you try?
I am trying, let's see.
I heard it only works with CUDA.
If you get an error, don't forget 'Edit the system environment variables' and PATH editing!
How likely are we to get Finetunes of this? Is it difficult or expensive to train?
I'll wait for more quantised versions.
Why wait when you can do it yourself. The more you know the better off you are. Quantizers are your friend. Get to know how to use them.
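For anyone curious what "quantizing it yourself" boils down to, here is a toy sketch of blockwise absmax 4-bit quantization in NumPy. Real NF4 (as used by bitsandbytes) uses a nonuniform codebook and double quantization, so this only illustrates the general idea, not the actual format:

```python
import numpy as np

def quantize_4bit(weights, block=64):
    """Split weights into blocks; store one fp32 scale per block plus small
    integer codes in [-7, 7]. This is roughly why 4-bit checkpoints shrink
    to about a quarter of fp16 size."""
    w = np.asarray(weights, dtype=np.float32).ravel()
    w = np.pad(w, (0, (-len(w)) % block))       # pad to a whole number of blocks
    blocks = w.reshape(-1, block)
    scales = np.abs(blocks).max(axis=1, keepdims=True)
    scales[scales == 0] = 1.0                   # avoid division by zero
    codes = np.round(blocks / scales * 7).astype(np.int8)
    return codes, scales

def dequantize_4bit(codes, scales):
    """Reconstruct approximate fp32 weights from codes and per-block scales."""
    return codes.astype(np.float32) / 7.0 * scales
```

The round trip loses at most half a quantization step per weight, which is the precision/size trade-off every quantized release of this model makes.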
Pull this crap down, it does absolutely nothing and you know it. People are wasting their time just so you can get a few eyes on you. You won't have mine since I'm blocking you.
Go be a weirdo somewhere else.
who hurt you
Too late to say that; they did get your eyes on them, because you wasted your time typing this out. xD
Stable Diffusion models still know what makes a woman beautiful. Even Flux got it wrong.
I'm having trouble getting this to work on Comfy. Any tips? I have one regarding the wrapper vs. Comfy: so far, if cached, the wrapper is better for speed with the same results, but if not cached, Comfy is better at loading the model, though not at actually producing the image. It is nice, however, to build your workflow as you like, as long as the nodes are essentially Comfy Core for now, and to relocate your files. So this all depends on you. Just little tips or whatever.
Hey totes, not sure if you use Discord, but I would love to invite you to my server (https://discord.gg/g5Pb8qNUuP). That way communication is much easier, and there are way more people.
@RalFinger thanks for the invite and in fact am on Discord, but the link expired or is invalid.
@totes oh wow, my main invite link ... is just ... gone? Here is a new one: https://discord.com/invite/pAz4Bt3rqb
Long live Flux chin! jk. If you have problems with Triton and SageAttention on Windows, please see this post: https://www.kombitz.com/2025/02/20/how-to-install-triton-on-windows/
thanks for sharing, came across this article too while googling for triton installation 😀
Can I run this checkpoint with my RTX 3070 (8GB VRAM)?
If so, can I use this checkpoint in ForgeUI?
Thanks in advance.
Please tell me that someone has plans to finetune it
Once again this is only for NVIDIA video card users; distressing market policy...
They're not trying to market anything. These are scientists trying to move the science forward, and home enthusiasts trying to turn it into something people can use at home. If you want to compute on NVIDIA, you use CUDA. If you want to compute on AMD, you use something like ROCm. People code for what they have, and is it any surprise that the people passionate enough to do that kind of hard work would have the most powerful GPUs you can get, which always happen to be NVIDIA cards?
NVIDIA pretty much owns the hobbyist AI market because no one has come up with an alternative to CUDA. Not the model's fault.
What are you talking about? Works just fine on ROCm too
@cutetodeath78409597 There's ROCm on the AMD side, and HiDream works just fine on it, just like StableDiffusion or Flux.
Do not spread misinformation.
ComfyUI-ZLuda, start using it.
@Nabby AMD's workstation cards, and their consumer cards at the same price, are actually better at compute. The problem is CUDA, not their GPU power. AMD doesn't even have a high-end competitor in the 9000 series, and even though the 6950 was better than the RTX 3090 in raw power, the 3090 obliterates it in work-related tasks because of CUDA. The RX 580, for instance, had compute performance similar to the GTX 1080 (a card it wasn't even close to competing with in price or gaming performance). One of the biggest reasons the 570/580s were a gem for miners was their amazing compute power for the price, but that was GCN. Rumor has it AMD is going to merge their AI and gaming chips with RDNA 5 (UDNA), but time will tell; hopefully that happens alongside some improvements in Vulkan (for AI) instead of ROCm nonsense, which doesn't work half the time. ZLUDA, ROCm, HIP, SCALE: so many CUDA "killers", yet it's all too much. I wish Vulkan had better performance than ROCm or CUDA for AI. Enough with these frameworks.
This model completely crashes my PC with a 3090 Ti.
I've tried two installations: PyTorch 2.8 + Triton, and the latest 2.7 without Triton.
@RalFinger Is there any specific workflow which is not crashing? Or other requirements?
You'll have to double-check your requirements. Also, did you try re-downloading? I've gotten corrupted models on rare occasions; the solution is simply to redownload. If you have an app that can generate hashes, you can use the hashes displayed on the model page to make sure your copy isn't corrupted. Note that Civitai doesn't display the CRC32 hash correctly: it moves the first character of the hash to the end, making it look like you got a corrupted download every time, but if you know about this discrepancy you can still use the CRC32 hash.
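The CRC32 quirk above is easy to account for in code. A small stdlib-only sketch; the "first character moved to the end" rotation is the commenter's observation about Civitai's display, not something I've verified myself:

```python
import zlib

def crc32_hex(data: bytes) -> str:
    """CRC32 as an 8-character uppercase hex string."""
    return f"{zlib.crc32(data) & 0xFFFFFFFF:08X}"

def matches_displayed_crc32(data: bytes, displayed: str) -> bool:
    """Accept either the true CRC32 or the rotated form described above
    (first hex character moved to the end)."""
    h = crc32_hex(data)
    rotated = h[1:] + h[0]
    return displayed.upper() in (h, rotated)
```

For a multi-gigabyte checkpoint you'd stream the file in chunks and update `zlib.crc32` incrementally rather than reading it all into memory.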
Can someone please answer me: does this work on Forge or Swarm?
I had trouble with Forge keeping up with new models, so I bit the bullet and installed Comfy (very easy with Stability Matrix, one-click automatic). Then I was surprised it wasn't as difficult as I thought. Yes, it looks messy, but there is a library where you just pick a premade workflow and it loads everything automatically; if some special thing isn't there, you can google the workflow (for example, a special GGUF version).
