My TG Channel - https://t.me/StefanFalkokAI
My TG Chat - https://t.me/+y4R5JybDZcFjMjFi
Hi! I'm sharing my working ComfyUI workflows for generating images with Flux.2 Dev.
I have included a workflow for the Flux.2 GGUF model.
You need the Flux.2 Dev GGUF model (https://huggingface.co/city96/FLUX.2-dev-gguf/tree/main), the text encoder (https://huggingface.co/Comfy-Org/flux2-dev/tree/main/split_files/text_encoders), and the VAE (https://huggingface.co/Comfy-Org/flux2-dev/tree/main/split_files/vae)
If you need the FP8 model instead - https://huggingface.co/Comfy-Org/flux2-dev/tree/main/split_files/diffusion_models
If you need the GGUF text-encoder version - https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506-GGUF/tree/main
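Once downloaded, the files go into ComfyUI's model folders. A typical layout is sketched below; exact subfolder names can vary between ComfyUI versions, and GGUF diffusion models usually go in models/unet when loaded through the ComfyUI-GGUF nodes, so treat this as a guide rather than a guarantee:

```
ComfyUI_windows_portable/ComfyUI/models/
├── unet/              # FLUX.2-dev GGUF file from the link above
├── diffusion_models/  # FP8 model, if you use that instead of GGUF
├── text_encoders/     # text encoder files (safetensors or GGUF)
└── vae/               # VAE file
```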
Download my ComfyUI build: https://huggingface.co/datasets/StefanFalkok/ComfyUI_portable_torch_2.10.0_cu130_cp313_sageattention_triton/tree/main. You also need to download and install CUDA 13.0 (https://developer.nvidia.com/cuda-13-0-0-download-archive) and Visual Studio (https://visualstudio.microsoft.com/downloads/) - note the link is Visual Studio, not VS Code; its C++ build tools are what Triton needs on Windows.
Leave a comment if you run into trouble or find a problem with the workflows.
Description
On the latest nightly ComfyUI version you'll get an error like "Too many values to unpack (expected 4)".
I have included a fixed distorch2.py file. Replace the existing file at this path: ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-MultiGPU
That's it. MultiGPU then works correctly.
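For context, that error is ordinary Python tuple unpacking gone wrong: the nightly ComfyUI started returning more values than the old ComfyUI-MultiGPU code unpacks. A minimal illustration (the function name here is hypothetical, not actual ComfyUI code):

```python
def load_models():
    # Hypothetical stand-in: newer API returns 5 values instead of the old 4
    return "model", "clip", "vae", "extra", "latent_format"

try:
    a, b, c, d = load_models()  # old 4-value unpacking -> ValueError
except ValueError as e:
    print(e)  # too many values to unpack (expected 4)

# The usual fix pattern: absorb any extra return values
a, b, c, d, *rest = load_models()
```

The patched distorch2.py applies the same idea: it adjusts the unpacking to match what current ComfyUI returns.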
ClipLoaderDisTorch2MultiGPU doesn't work, so I replaced it with ClipLoaderMultiGPU in the workflows.
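If you want to make the same swap in your own saved workflows, you can patch the workflow JSON directly. A minimal sketch, assuming the standard ComfyUI workflow format where each entry in "nodes" carries a "type" field with the node class name (node names taken from this post):

```python
import json

def swap_node_type(workflow: dict, old: str, new: str) -> int:
    """Replace every node of class `old` with class `new`; return how many changed."""
    changed = 0
    for node in workflow.get("nodes", []):
        if node.get("type") == old:
            node["type"] = new
            changed += 1
    return changed

# Usage: load a workflow exported from ComfyUI, patch it, save it back.
# wf = json.load(open("my_flux2_workflow.json"))
# swap_node_type(wf, "ClipLoaderDisTorch2MultiGPU", "ClipLoaderMultiGPU")
# json.dump(wf, open("my_flux2_workflow_fixed.json", "w"), indent=2)
```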