CivArchive
    HiDream I1 FULL-DEV-FAST [NF4] - FAST
    Preview 69325608

    Source: https://github.com/hykilpikonna/HiDream-I1-nf4 from hykilpikonna

    GGUF versions: CLICK ME!

    Installation guide from razzz here (Also a better location to discuss issues and errors, thank you!)
    Installation tutorial from AiSearch with flash attention!

    This is a reupload! I am not the original author! Have fun with the less VRAM-hungry NF4 version!
    You also need at least 16GB of VRAM to run any of the models!

    💪Train your own model: https://runpod.io?ref=gased9mt
    🍺 Join my discord: https://discord.com/invite/pAz4Bt3rqb

    Description

    FAQ

    Comments (96)

    BetterThanNothingApr 11, 2025· 2 reactions
    CivitAI

    Thanks!!

    kwResearchApr 11, 2025
    CivitAI

    Do I have to download meta llama 3.1 if run locally?

    SLACK69Apr 11, 2025· 2 reactions

    There's a ComfyUI workflow

    SLACK69Apr 11, 2025
    CivitAI

    Is there any way I can run this on a 6GB card? If not, how long will I have to wait?

    RalFinger
    Author
    Apr 11, 2025

    No, you can't run this on 6GB. You need at least 16GB.

    SLACK69Apr 11, 2025

    @RalFinger any idea when they might drop a more budget friendly version of the model?

    RalFinger
    Author
    Apr 11, 2025

    @SLACK69 Let's wait for the GGUF versions, but even then only the Q8 model is on par with these NF4 versions

    89184840795s492Apr 11, 2025· 1 reaction
    CivitAI
    How do I run the NF4 version? What nodes are needed?
    RalFinger
    Author
    Apr 11, 2025· 4 reactions

    Read the model description

    fmodApr 11, 2025· 1 reaction

    That is a valid question that "Read the model description" does not answer

    RalFinger
    Author
    Apr 11, 2025

    @fmod you need the loader, which is the first link below the original source. How hard is that? There is also an example workflow.

    fmodApr 11, 2025

    I assume you are referring to the "sampler". That loads the model from Hugging Face. It is not currently using safetensors, if I am not mistaken. If I am mistaken, please tell me which folder the safetensor files should be placed in so that they are loaded by the example workflow

    miki1882Apr 11, 2025

    @fmod I would also like to know that. I downloaded the model and can't use it because the node downloads a different model from Hugging Face instead.

    fmodApr 11, 2025

    @miki1882 According to the discussion on the github page (https://github.com/lum3on/comfyui_HiDream-Sampler/issues/40) regarding the safetensor files: "Unfortunately, there is no model input option yet to connect them with the model loader node." So, it seems that at the moment those nodes are not compatible with the safetensor files.

    miki1882Apr 12, 2025

    @fmod After I ran "pip3 install --upgrade --force-reinstall auto-gptq datasets accelerate optimum bitsandbytes", I could see NF4 options in the selection. This doesn't let you use the model you downloaded here, but it's still the NF4 version

    RalFinger
    Author
    Apr 12, 2025

    @miki1882 they are working on local model support for the comfy node.

    89184840795s492Apr 12, 2025

    @miki1882 and where should it be launched, in what folder?

    VortexVisionApr 11, 2025· 1 reaction
    CivitAI

    I hope you have something for 8GB VRAM

    RalFinger
    Author
    Apr 11, 2025· 1 reaction

    sadly, nothing yet

    EvivrusApr 11, 2025· 8 reactions
    CivitAI

    So I had a bunch of trouble getting the Comfy UI custom node working in its current state. What worked for me:

    1. Use Python 3.11.9
    2. Install Torch 2.6 + CUDA 12.6
    3. If using Windows, install triton-windows

    If you are struggling to make it work I hope this helps.

    dailydoseofaiartApr 11, 2025
    CivitAI

    This is still downloading for me, but while I'm waiting: I just followed the tutorial by AI Search exactly. At the end he used the workflow (which is different from the one they have on the page now) and used the "fast" model to generate an image; he said it had to download for a while, about 30 minutes. I wanted to use the dev NF4 version and therefore chose the dev option in the HiDream sampler, hoping it would somehow use the NF4 version, but of course it downloaded for 2 hours and it was the non-quantized model. The workflow initially has the dev-nf4 model pre-selected, but if I run it, it doesn't download it; it just gives me an error saying that it's not in the "list" or whatever. Trying to download the NF4 model from Hugging Face was confusing, because it is a big repository and I didn't know what I needed to download. Perhaps the whole thing, but that seems weird, because I was expecting a single safetensors file, and even if I cloned the whole thing I wouldn't know where to put it so that it shows up in the model_type input on the HiDream sampler node.

    Sorry for my incompetence, I know absolutely nothing and am very dependent on exact instructions. If anyone can decipher my problem from my ramblings or knows that it's just gonna show up once I have downloaded this checkpoint, I would appreciate your answer

    Update: it does not show up, at least not if I put it in the checkpoints folder

    RalFinger
    Author
    Apr 11, 2025· 1 reaction

    Try the following (https://github.com/lum3on/comfyui_HiDream-Sampler):
    git pull
    pip install -r requirements.txt

    dailydoseofaiartApr 11, 2025

    @RalFinger thank you, sadly that didn't quite solve my problem...

    I now did both of those things in the HiDream-Sampler folder and they did do something, but I still can't choose any other model. And in general I don't know where in the workflow to put the checkpoint

    This is the error I'm talking about:

    Failed to validate prompt for output 17:

    * HiDreamSampler 7:

    - Value not in list: model_type: 'dev-nf4' not in ['full', 'dev', 'fast']

    dev-nf4 is pre-selected, I could change it but so far that has only downloaded the full models

    It loads up saying "NF4: False, Requires BNB: True, Requires GPTQ deps: False" if that means anything.

    RalFinger
    Author
    Apr 11, 2025· 2 reactions

    @Radyschen Sorry to hear that. Would you please open a new issue on the GitHub page? I guess you will have more luck there than in the comment section on this model for now. I would appreciate a reply if you figure out your issue. You can also join my Discord; the author of the Comfy node is also part of the community (it's a German/English community). Good luck!

    dailydoseofaiartApr 11, 2025

    @RalFinger Oh, I'm German too. But for the sake of the conversation, I will keep it in English. I would love to join the Discord. Just one more thing: how does using the NF4 models work for you? Do you select it in the node?

    RalFinger
    Author
    Apr 11, 2025· 2 reactions

    @Radyschen I haven't even tested it yet, but others on the Discord did; just hop on and take it from there. Would be happy to see you there ☺

    AIDreamingApr 11, 2025

    @RalFinger Should I go to the custom_nodes folder, open CMD there, and then pip install? If so, I'm getting an error, and I'm not sure why there's an "a" folder in the path; I do not have one... Standalone ComfyUI install.

    I'm also only seeing the non-NF4 models in the node. When loading the workflow, the node has a "cached" NF4 ref, but when I try to run it, it only complains about the missing entry.

    D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_HiDream-Sampler>pip install -r requirements.txt

    Fatal error in launcher: Unable to create process using '"D:\a\ComfyUI\python_embeded\python.exe" "D:\ComfyUI_windows_portable\python_embeded\Scripts\pip.exe" install -r requirements.txt': The system cannot find the file specified.

    RalFinger
    Author
    Apr 11, 2025

    @aaltomar381 just clone it again with this command, and then the requirements.txt should be there:
    git clone https://github.com/lum3on/comfyui_HiDream-Sampler ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_HiDream-Sampler
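    The "Fatal error in launcher" above typically means pip.exe's embedded launcher points at a stale interpreter path. On a portable ComfyUI build, a common workaround is to call pip as a module through the bundled interpreter instead of pip.exe. This is a minimal sketch of that pattern; the embedded-interpreter path in the comment is illustrative, not verified against any particular install.

```python
# Sketch: invoke pip as a module through a known-good interpreter,
# bypassing the broken pip.exe launcher entirely.
import subprocess
import sys


def run_pip(python_exe, *pip_args):
    """Run `<python_exe> -m pip <pip_args...>` and return the CompletedProcess."""
    return subprocess.run(
        [python_exe, "-m", "pip", *pip_args],
        capture_output=True, text=True,
    )


def pip_install_requirements(python_exe, requirements_path):
    """Install a requirements file via the given interpreter's pip."""
    return run_pip(python_exe, "install", "-r", requirements_path)


if __name__ == "__main__":
    # On a portable ComfyUI build you would pass the embedded interpreter
    # instead of sys.executable, e.g.
    # r"D:\ComfyUI_windows_portable\python_embeded\python.exe"
    result = run_pip(sys.executable, "--version")
    print(result.stdout.strip())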

    tracedInAirApr 13, 2025

    here's what I changed in hidreamsampler.py after cloning the folder with the safetensor file, config.json, etc. from Hugging Face:

    hidreamsampler.py

    fast_nf4_name = "F:/Downloads/Apps/Stable/common/models/HiDream-I1-Fast-nf4"

    "fast-nf4": {
        "path": fast_nf4_name,
        "guidance_scale": 0.0, "num_inference_steps": 16, "shift": 3.0,
        "scheduler_class": "FlashFlowMatchEulerDiscreteScheduler",
        "is_nf4": True, "is_fp8": False, "requires_bnb": False, "requires_gptq_deps": True
    },

    then you just select fast-nf4 in the ComfyUI workflow to use whatever you downloaded.

    NoNameNeedApr 11, 2025· 10 reactions
    CivitAI

    Let's see some NSFW images from this model without any LoRAs

    DefprofApr 22, 2025

    Works fine for me

    CatzApr 11, 2025· 9 reactions
    CivitAI

    Apparently HiDream is even better than Flux Pro (the paid version) and is uncensored from the start. It is going to be interesting to see whether it overtakes Flux, the king of image models for a long time now!

    bobby888Apr 11, 2025· 2 reactions

    According to the leaderboards and benchmarks, it is not better than Flux Pro, but close.

    zanvanserApr 11, 2025· 3 reactions

    It will be better once it runs on 8GB VRAM; currently Flux is better because it can easily do that

    kevenggg868Apr 11, 2025· 6 reactions

    HiDream isn't distilled, so if it's easy to train (let's hope it is), it will probably take over Flux's throne.

    MegaMitsuApr 12, 2025

    People actually pay for models? Also, what makes Flux Pro worth paying for compared to regular Flux? Is it just more training data?

    CatzApr 12, 2025

    @MegaMitsu It produces better results than the Dev version, allows commercial usage, and can only be run on their online platform (good for people without good hardware for local generation). The Dev version has access to LoRAs from the community, which drastically increase the quality.

    bobby888Apr 17, 2025· 1 reaction

    @Catz Oh wow. I have not seen that it overtook it. Thanks for the info.

    blinkdotlehApr 11, 2025
    CivitAI

    Just upgraded to Blackwell and it looks like it's not supported yet... damn

    RalFinger
    Author
    Apr 11, 2025

    @ChronoKnight is looking at that right now. We also talk about that on my discord if you want to join.

    ChronoKnightApr 11, 2025

    @RalFinger I won't use the NF4 though. Maybe later, but right now I'm doing a regular fresh Comfy install so I don't mess with my Triton and SageAttention setup.

    GradashoApr 11, 2025

    @RalFinger would love to see an FP4 model but my understanding is that support isn't out for FP4 anywhere

    razzzApr 11, 2025· 8 reactions
    CivitAI

    I made a helpful guide for those interested.

    https://civitai.com/articles/13536

    RalFinger
    Author
    Apr 11, 2025

    Thank you @razzz! I will link your article on the model page if you are OK with it?

    razzzApr 11, 2025· 1 reaction

    @RalFinger Sure. The more visibility, the better.

    FidregoreJul 20, 2025

    Appears to be taken down now, and I can't find any other guides to get this set up

    IGLXX47Apr 11, 2025· 8 reactions
    CivitAI

    how to use this safetensor file

    GitarooManApr 12, 2025· 5 reactions
    CivitAI

    for everyone who is lost - it goes in the CHECKPOINTS folder

    zczcgApr 12, 2025

    You mean I need to put the model in the checkpoints folder?

    gman_umschtApr 14, 2025· 2 reactions

    That's right - it goes into the square hole

    bananaboyAVApr 12, 2025· 12 reactions
    CivitAI

    As for NSFW content, its knowledge of anatomy is better than Flux's, but it's still difficult to get good results consistently.

    totesApr 12, 2025· 3 reactions
    CivitAI

    I'm not quite clear on how you use it from the checkpoints folder if, by default, the install instructions have it download automatically. Can you clear that up for me, please?

    RalFinger
    Author
    Apr 12, 2025· 3 reactions

    Hey totes, it seems that the Comfy node also downloads the models directly from HF. This way you don't need to download the NF4 models for now. There will be a loader (we hope directly from ComfyUI) soon.

    totesApr 12, 2025

    @RalFinger Thanks tons.

    steezo_jonesApr 14, 2025

    I am actively downloading the models and modifying hidreamsampler.py to NOT auto-download, because ComfyUI keeps force-closing during the 'git clone https://huggingface...' operation. I want to see what happens if I skip this step.

    totesApr 16, 2025

    @steezo_jones If you don't have directions on how to do that, I'd completely understand, but do you have instructions on where to get that sort of know-how? And also I'd understand that as well. Smart move, by the way.

    steezo_jonesApr 17, 2025· 1 reaction

    @totes I didn't have git lfs within my ComfyUI venv. For some reason I was getting no information on why ComfyUI would just return to the prompt. I am reading stuff on Reddit, watching YouTube videos, and using ChatGPT to learn everything I can about AI image, video, and audio generation. Not sure why. I don't get paid for it.

    totesApr 17, 2025

    @steezo_jones Sounds good. I just thought there might be some info on modding files in those particular locations, but hey, we do what we can. I'm in the same boat. Thanks for replying!

    steezo_jonesApr 18, 2025

    @totes Hey, did you get HiDream working yet? "Benji's AI Playground" channel on youtube just posted a video explaining how to get this running from the .safetensors files within ComfyUI.

    totesApr 19, 2025· 1 reaction

    @steezo_jones Oh yeah, I got it working thanks to various YouTube videos. Really, thanks for the tip!

    reddotstApr 12, 2025
    CivitAI

    Will this work on mac?

    RalFinger
    Author
    Apr 12, 2025· 2 reactions

    I don't know, did you try?

    I am trying, let's see

    kunde2Apr 12, 2025

    I heard it only works with CUDA

    ggkthApr 12, 2025· 4 reactions
    CivitAI

    If you get an error, don't forget 'Edit the system environment variables' and PATH editing!

    StraitjacketApr 13, 2025· 6 reactions
    CivitAI

    How likely are we to get Finetunes of this? Is it difficult or expensive to train?

    WonderGuard_SpiritombApr 13, 2025· 6 reactions
    CivitAI

    I'll wait for more quantised versions

    RalFinger
    Author
    Apr 15, 2025· 1 reaction

    Uploading the GGUF versions right now.

    condzero1950Apr 16, 2025

    Why wait when you can do it yourself? The more you know, the better off you are. Quantizers are your friend; get to know how to use them.

    paper_butterlyApr 15, 2025· 47 reactions
    CivitAI

    Pull this crap down, it does absolutely nothing and you know it. People are wasting their time just so you can get a few eyes on you. You won't have mine since I'm blocking you.

    _degenerativeai_Apr 18, 2025· 7 reactions

    Go be a weirdo somewhere else.

    MegazardApr 25, 2025· 4 reactions

    who hurt you

    Rule34DiffusionJul 13, 2025

    Too late to say that; they did get your eyes on them, because you wasted your time typing this out. xD

    JerryOverApr 16, 2025· 4 reactions
    CivitAI

    Stable Diffusion models still know what makes a woman beautiful. Even Flux got it wrong.

    totesApr 16, 2025
    CivitAI

    I'm having trouble getting this to work in Comfy. Any tips? I have one regarding the wrapper vs. Comfy: so far, if cached, the wrapper is faster with the same results, but if not, Comfy is better at loading the model, although the actual image generation is not. It is nice, however, to build your workflow as you see fit, as long as the nodes are essentially Comfy Core as of now, and to relocate your files. So this all depends on you. Just little tips or whatever.

    RalFinger
    Author
    Apr 21, 2025· 1 reaction

    Hey totes, not sure if you use Discord, but I would love to invite you to my server (https://discord.gg/g5Pb8qNUuP). That way communication is way easier, and there are way more people.

    totesApr 22, 2025

    @RalFinger thanks for the invite and in fact am on Discord, but the link expired or is invalid.

    RalFinger
    Author
    Apr 22, 2025· 1 reaction

    @totes oh wow, my main invite link ... is just ... gone? Here is a new one: https://discord.com/invite/pAz4Bt3rqb

    amida_dApr 17, 2025· 2 reactions
    CivitAI

    Long live Flux chin! jk. If you have problems with Triton and SageAttention on Windows, please see this post: https://www.kombitz.com/2025/02/20/how-to-install-triton-on-windows/

    RalFinger
    Author
    Apr 21, 2025

    Thanks for sharing, I came across this article too while googling for Triton installation 😀

    Sk0rpzApr 23, 2025· 1 reaction
    CivitAI

    Can I run this checkpoint with my RTX 3070 (8GB VRAM)?
    If so, can I use this checkpoint in ForgeUI?
    Thanks in advance.

    RalFinger
    Author
    Apr 24, 2025· 1 reaction

    Forge is not yet updated to handle HiDream models

    Sk0rpzApr 24, 2025· 2 reactions

    @RalFinger Oh damn! Thanks for the answer. I was about to try it later, but now you saved me the time, hehe

    aaaa_aApr 24, 2025
    CivitAI

    Please tell me that someone has plans to finetune it

    DerienApr 25, 2025· 14 reactions
    CivitAI

    once again this is only for NVIDIA video card users; a distressing market policy...

    NabbyMay 1, 2025· 10 reactions

    They're not trying to market anything. These are scientists trying to move the science forward and home enthusiasts trying to turn it into something people can use at home. If you want to compute on NVIDIA, you use CUDA. If you want to compute on AMD, you use something like ROCm. People code for what they have, and is it any surprise that the people passionate enough about this to do that kind of hard work would have the most powerful GPUs you can get, which always happen to be NVIDIA cards?

    cutetodeath78409597May 3, 2025· 7 reactions

    NVIDIA pretty much owns the hobbyist AI market because no one has come up with an alternative to CUDA. Not the model's fault.

    AIArtsChannelMay 7, 2025· 7 reactions

    What are you talking about? Works just fine on ROCm too

    AIArtsChannelMay 7, 2025· 8 reactions

    @cutetodeath78409597 There's ROCm on the AMD side, and HiDream works just fine on it, just like Stable Diffusion or Flux.
    Do not spread misinformation.

    MeMakeStuffMay 24, 2025· 1 reaction

    ComfyUI-ZLuda, start using it.

    gurusarrasOct 22, 2025· 1 reaction

    @Nabby AMD's workstation cards, and their consumer cards at the same price, are actually better at compute. The problem is CUDA, not their GPU power. AMD doesn't even have a high-end competitor in the 9000 series, and even though the 6950 was better than the RTX 3090 in raw power, the 3090 obliterates it in work-related tasks because of CUDA. The RX 580, for instance, had compute performance similar to the GTX 1080 (a card it wasn't even close to competing with on price or gaming performance). One of the biggest reasons the 570/580s were a gem for miners was their amazing compute power for the price. Rumor has it AMD is going to merge their AI and gaming chips with RDNA 5 (UDNA), but time will tell; hopefully that happens alongside some improvements in Vulkan (for AI) instead of ROCm, which doesn't work half the time. ZLUDA, ROCm, HIP, SCALE: so many CUDA "killers", yet it's too much. I wish Vulkan had better performance than ROCm or CUDA when it comes to AI. Enough with these frameworks.

    maraconMay 8, 2025
    CivitAI

    This model completely crashes my PC with a 3090 Ti.
    I've tried 2 installations: PyTorch 2.8 + Triton, and the latest 2.7 without Triton.
    @RalFinger Is there any specific workflow that is not crashing? Or other requirements?

    NitricAcid09Sep 28, 2025

    You'll have to double-check your requirements. Also, did you try re-downloading? I've gotten corrupted models on rare occasions; the solution is simply to redownload. If you have an app that can generate hashes, you can use the hashes displayed on the model page to make sure your copy isn't corrupted. Note that Civitai doesn't display the CRC32 hash correctly: it moves the first character of the hash to the end or something, making it look like you got a corrupted download every time. But if you know about this discrepancy, you can still use the CRC32 hash.
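    The hash check above can be sketched in a few lines. This is a minimal example for the SHA256 case (the one Civitai displays reliably); the file name in the comment is illustrative, and the uppercase output is just to match how model pages typically show hashes.

```python
# Sketch: verify a downloaded model file against the SHA256 hash
# shown on the model page.
import hashlib


def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so multi-GB models don't fill RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest().upper()


# Example usage (hash value is a placeholder you copy from the model page):
# expected = "ABC123..."
# assert sha256_of("hidreamI1FULLDEVFAST_fast.safetensors") == expected
```

    A mismatch means the download is corrupted and should simply be re-fetched.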

    bigdog11Dec 27, 2025
    CivitAI

    Can someone answer me please: does this work on Forge or Swarm?

    pink0909Apr 29, 2026

    I had trouble with Forge keeping up with new models, so I bit the bullet and installed Comfy (very easy with Stability Matrix, one-click automatic). I was surprised it wasn't as difficult as I thought. Yes, it looks messy, but there is a library where you just pick a premade workflow and it loads everything automatically; if some special thing isn't there, you can google the workflow (for example, a special GGUF version).

    Checkpoint
    HiDream

    Details

    Downloads
    848
    Platform
    CivitAI
    Platform Status
    Available
    Created
    4/11/2025
    Updated
    5/13/2026
    Deleted
    -

    Files

    hidreamI1FULLDEVFAST_fast.safetensors

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.