UPDATE!
Do a git pull in the ComfyUI_MagiHuman_fp8_ditFIX_nodes custom nodes folder (or reinstall the nodes) and it will update them. USE THE NEW WORKFLOW!
git pull
python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI_MagiHuman_fp8_ditFIX_nodes\requirements.txt
CUSTOM NODE FORK:
https://github.com/RealRebelAI/ComfyUI_MagiHuman_fp8_ditFIX_nodes/tree/main
I recoded some nodes and added a new unload-model node; this lowers the VRAM needed from the main model through the SR stage. I simply can't run the fp8 myself. I'll continue to update the nodes based on errors reported by the community, or when GGUF support arrives.
PLEASE post your outputs if it works lol
-
VERY HIGH VRAM+RAM REQUIREMENTS: 24-30 GB combined compute MINIMUM.
EXPERIMENTAL! READ THE ENTIRE DESCRIPTION! The workflow is not finished until further notice; I'm having custom node complications. Until this description changes, expect this workflow not to work. When you see this page updated, it's working ❤️
FILES:
https://huggingface.co/realrebelai/DaVinci_MagiHuman_fp8_merges/tree/main
The T5-Gemma text encoder goes in the "gguf" folder, NOT the text encoder folder!
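As a sketch, assuming a standard ComfyUI models folder layout (the file name below is a placeholder; use the actual encoder file downloaded from the HuggingFace repo above):

```shell
# Demo with a placeholder name; "t5gemma_encoder.gguf" stands in for the
# T5-Gemma encoder file you downloaded from the repo linked above.
touch t5gemma_encoder.gguf
# Create the "gguf" models folder if it does not exist yet.
mkdir -p ComfyUI/models/gguf
# Move the encoder there - NOT into ComfyUI/models/text_encoders/.
mv t5gemma_encoder.gguf ComfyUI/models/gguf/
```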
REQUIRES MY CUSTOM NODE FORK:
https://github.com/RealRebelAI/ComfyUI_MagiHuman_fp8_ditFIX_nodes/tree/main
These custom nodes are custom-fitted to run the fp8 model and the fp8 SR model, and they are required for this workflow specifically. The models will not run on any other custom nodes but mine, as they are CODED specifically for my model. THEY ARE EXPERIMENTAL, AND I AM UPDATING THEM AS ISSUES COME IN. Please open an issue ON GITHUB if you run into problems. You can also git pull the nodes frequently, as I'm currently updating them during testing, mainly adjusting offloading to possibly fit the models in 24 GB or less of compute.
Step counts:
DISTILLED - 8 steps
Description
fp8 distilled model