Quantized refiners work amazingly well on SDXL and FLUX.
Note: I used the 32-bit CLIP-G. I discovered the CLIP-G-Large FP32 and posted a pruned version of it.
The refiner model uses the single CLIP loader, not the dual CLIP loader like the SDXL base model.
Article with more detail here
You could build the FP32 UNet and CLIP-G into a checkpoint, but it would be 12 GB, and I found this setup works much faster, even with CPU offloading.
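For scale, the 12 GB figure is consistent with FP32 storage at 4 bytes per parameter. A quick sanity check (the parameter counts below are approximate public figures I'm assuming, not measured from these files):

```python
# Rough size estimate for an FP32 refiner checkpoint (4 bytes per parameter).
# Parameter counts are approximations, not taken from the posted files.
BYTES_PER_PARAM_FP32 = 4

unet_params = 2.3e9    # SDXL refiner UNet (approx.)
clip_g_params = 695e6  # CLIP-G / ViT-bigG text encoder (approx.)

total_gb = (unet_params + clip_g_params) * BYTES_PER_PARAM_FP32 / 1e9
print(f"Estimated FP32 checkpoint size: {total_gb:.1f} GB")  # → about 12 GB
```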
Use with either the 0.9 or 1.0 SDXL VAE.
The green bottle has the SDXL workflow; the bird, blue bottle, and fox have the FLUX workflow. They take advantage of BSRGAN 2x upscaling, but you can use any 2x model you wish.
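Since the 2x step is interchangeable, here is a minimal stand-in sketch using plain Lanczos resampling with Pillow in place of a learned 2x model (BSRGAN, ESRGAN, and similar would slot into the same spot in the workflow):

```python
from PIL import Image

def upscale_2x(img: Image.Image) -> Image.Image:
    """Placeholder for a learned 2x upscaler: plain Lanczos resampling."""
    return img.resize((img.width * 2, img.height * 2), Image.LANCZOS)

src = Image.new("RGB", (512, 512), "green")  # stand-in source image
out = upscale_2x(src)
print(out.size)  # (1024, 1024)
```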
My workflows are chaotic, so if someone could clean them up into a nice pipeline, that would be great.
We need an NSFW finetune of the FP32 refiner.
Refiners add amazing detail and could easily be incorporated to enhance FLUX Schnell 4-8 step or Lightning 4-8 step base images.
Comments (20)
In practical terms, what does this do exactly? Are we replacing clip_l.safetensors or the t5xxl file?
CLIP-G is used for SDXL (dual CLIP with CLIP-L), SD3.5 (triple CLIP with T5-XXL), and the SDXL refiner (single CLIP-G only).
You cannot pair it with FLUX directly, but you can use the SDXL refiner on FLUX images.
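To make the "refiner on FLUX images" idea concrete: a refiner pass is essentially img2img at low denoise, so only the tail of the sampling schedule runs on the finished FLUX output. A toy sketch of that arithmetic (the 0.25 denoise and 20 steps are illustrative assumptions, not values from the author's workflow):

```python
def refiner_schedule(num_steps: int, denoise: float) -> list[int]:
    """Return the sampler step indices a refiner/img2img pass actually runs.

    A full txt2img run uses steps 0..num_steps-1; a refiner pass skips the
    early (high-noise) steps and executes only the final fraction.
    """
    start = round(num_steps * (1.0 - denoise))
    return list(range(start, num_steps))

steps = refiner_schedule(num_steps=20, denoise=0.25)
print(steps)  # [15, 16, 17, 18, 19]
```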
@Felldude Hi, az420 is right. You've certainly put your finger on something interesting, but we can't work out how to use your two files in practical terms. In which node should we use your FP32 SDXL/FLUX refiner? Concerning CLIP-G, if I understand your answer correctly, we have to do a second pass on the image with SDXL after a first pass creating the image with FLUX?
@go_renc233 That's my understanding too; when I get home I'll give it a try. I have a workflow I can adapt. If I understand it correctly, once FLUX finishes image generation you load the SDXL refiner using this checkpoint. I'm interested to see if it makes much of a difference. I've been fiddling with FLUX and other models as refiners after generation, and the results have been mixed at best. Depending on the subject, I feel some images can benefit from refiners and others not. Right now I'd say 60/40, with the 40 benefiting.
Edit/Update: Put it this way: the reason it works well on nature images is that a degree of randomness is expected there. With humans and buildings, for example, it starts to have issues.
Original FLUX render - https://kappa.lol/CBtLo.png
Using your Workflow and models - https://kappa.lol/H4xvz.png
The building is suffering drastically from the upscale; it doesn't matter whether it's a model upscale or plain image resizing.
@go_renc233 Yes, the CLIP-G is attached to the SDXL refiner model and can be used to enhance FLUX images, which is contrary to most advice.
@SencneS I have an article showing the amount of detail it can add, even to FLUX images. It will not work with humans; the refiner seems to be intentionally broken for that function.
@Felldude These are my findings as well. Also deliberately so.
What would this do in an SDXL workflow?
It's part of my FP32 model. Running the CLIP at FP32 seems to speed things up a bit, as the CPU handles FP32 better, and you also get higher precision. CPU offloading wasn't a thing when SDXL first dropped.
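The CPU preference for FP32 is easy to observe: NumPy's float32 matmul goes through optimized BLAS, while float16 falls back to a slow path on most CPUs. A quick, machine-dependent timing sketch (not a rigorous benchmark):

```python
import time
import numpy as np

def time_matmul(dtype, n=512, repeats=3):
    """Best-of-N wall-clock time for an n x n matrix multiply in `dtype`."""
    a = np.random.rand(n, n).astype(dtype)
    b = np.random.rand(n, n).astype(dtype)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - t0)
    return best

t32 = time_matmul(np.float32)
t16 = time_matmul(np.float16)
print(f"fp32: {t32:.4f}s  fp16: {t16:.4f}s")
```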
@Felldude That's very true. In my experience (I run on CPU only, no GPU), I noticed a while ago that simply running FP32 models is faster than FP16 or anything else, only by seconds, but faster.
As for your model here, I thought it might give better quality, that's all. I'm always searching for better quality from SDXL.
What is the difference between v1.0 and the version tagged as CLIP-G-Large-Pruned-FP32?
The v1.0 is 2.28 GB and the pruned is 2.59 GB. I would expect the original file to be bigger and the pruned version to be smaller.
- Please explain the difference between v1 and pruned.
- Please explain whether you're also attaching/including the original as one of those two posted files.
The 1.0 is the GGUF refiner model; the CLIP-G is used in the single CLIP loader.
I'm trying the CLIP-G GGUF file with DualCLIPLoader and it's not working for me. Running the latest version of ComfyUI, I'm getting this error:
\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 319, in load_data
    clip_data.append(gguf_clip_loader(p))
\ComfyUI\custom_nodes\ComfyUI-GGUF\nodes.py", line 123, in gguf_clip_loader
    assert "enc.blk.23.ffn_up.weight" in raw_sd, "Invalid Text Encoder!"
Please advise.
The refiner model is single CLIP: CLIP-G only.
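For context on the error above: the assertion appears to check for a T5-style tensor name (`enc.blk.*`), which a CLIP-G file does not contain, so feeding the CLIP-G GGUF into a loader slot that expects T5-XXL trips it. A toy illustration with stand-in key sets (only `enc.blk.23.ffn_up.weight` comes from the actual error; the CLIP-G key below is a made-up placeholder):

```python
T5_MARKER_KEY = "enc.blk.23.ffn_up.weight"  # the key the loader asserts on

def looks_like_t5(state_dict_keys) -> bool:
    """Mimic the loader's check: a T5-XXL GGUF contains enc.blk.* tensors."""
    return T5_MARKER_KEY in state_dict_keys

t5_keys = {"enc.blk.0.attn_q.weight", "enc.blk.23.ffn_up.weight"}
clip_g_keys = {"text_model.encoder.layers.0.q_proj.weight"}  # placeholder naming

print(looks_like_t5(t5_keys))      # True
print(looks_like_t5(clip_g_keys))  # False -> "Invalid Text Encoder!"
```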
@Felldude Can you show us a workflow where I can use the single CLIP GGUF loader? Please and thank you.
@azimuthalobserver It's the GGUF UNet loader with the standard single CLIP loader; just load the workflow from the posted images.
Trying the SDXL workflow. How do I load the SDXL model? There is only a UNet Loader node in the workflow. Should it be replaced with a Checkpoint Loader node?
The GGUF version would use the GGUF UNet loader node with the single CLIP-G.
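For anyone piecing the loaders together, a minimal sketch of the two nodes in ComfyUI API-format JSON (the filenames are placeholders, and the node/field names are my best reading of ComfyUI-GGUF and core ComfyUI; verify them against your install):

```json
{
  "1": {
    "class_type": "UnetLoaderGGUF",
    "inputs": { "unet_name": "sdxl-refiner.gguf" }
  },
  "2": {
    "class_type": "CLIPLoader",
    "inputs": {
      "clip_name": "CLIP-G-Large-Pruned-FP32.safetensors",
      "type": "stable_diffusion"
    }
  }
}
```

The UNet output of node 1 and the CLIP output of node 2 then feed the sampler and prompt-encode nodes the same way a checkpoint loader's outputs would.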
Where do I put refiners in A1111 / Forge Neo? Thanks!
I'm not sure about that.
