Full checkpoint with improved TE; do not load an additional CLIP/TE
SD 3.5 Medium + Google FLAN
Update your ComfyUI before using (10/29/2024)
FP16 merge with Google FLAN (from FP32)
SGM Uniform with DPM++ 2M at 40 steps, or Euler with Normal/Simple at 20 steps, works well
Do not use negatives above a 0.2 timestep - if you do not understand this line, load any image as a workflow (the same instructions as base SD 3.5)
I posted an FP8 version, but even with an 8GB card I do not see an improvement in speed, so I still recommend the FP16 (NF4 support in Forge would be great)
In my testing on a low-VRAM card this model runs several times faster than the Large model
Per the Apache 2.0 license FLAN is attributed to Google
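
If you prefer a script to the ComfyUI graph, here is a minimal sketch of how the merged single-file checkpoint could be loaded with Hugging Face diffusers, assuming its single-file support handles this merge. The file name and prompt are placeholders, and only the 40-step count from the recommendation above is carried over; diffusers uses its own default SD3 scheduler, and the negative-conditioning timestep trick is ComfyUI-specific, so this is only an approximation of the workflow, not the author's setup.

```python
# Minimal sketch (not the author's workflow): loading the merged FP16
# single-file checkpoint with diffusers. The path below is a placeholder.
import torch
from diffusers import StableDiffusion3Pipeline

# The merged checkpoint already bundles the improved text encoders,
# so no separate CLIP-L / CLIP-G / T5 files are loaded here.
pipe = StableDiffusion3Pipeline.from_single_file(
    "sd35_medium_flan_fp16.safetensors",  # placeholder file name
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # helps on low-VRAM cards

image = pipe(
    prompt="a lighthouse on a rocky coast at sunset",  # example prompt
    num_inference_steps=40,  # mirrors the 40-step recommendation above
    guidance_scale=4.5,
).images[0]
image.save("output.png")
```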
Comments (13)
Can this be run on Forge?
Amazing speed, thank you!
If I want to try a larger model, should I enable ComfyUI's shared memory so the model swaps between VRAM and system memory or the CPU? (For example, I want to try a 25GB model with 12/16GB memory.)
Support for NF4 is available in several UIs, and it'll come to Forge in time, so might as well release an NF4 version?
Sorry if I'm dumb. So I just load the checkpoint, no CLIP L, no CLIP G, no T5?
Or do I just load CLIP L and G without loading the T5, since you're using FLAN?
Thanks for the model!
What is FLAN exactly? How is it better than the t5xxl, l, and g clips?
PS: thank you for your work!
How do you manage to even load the FLAN models in Comfy to make these checkpoints? I've tried myself before, and it always gives an error if the encoder model isn't actually the non-FLAN t5-v1.1-xxl.
Are there ControlNets, upscalers, LoRAs, and IP-Adapters for this model?
Works like a charm!
I have 16GB RAM and 6GB VRAM; which one is better?
Runs on a GTX 1060 with 6GB VRAM without crashing! Performance is comparable to SDXL, which is awesome.
RuntimeError: Promotion for Float8 Types is not supported, attempted to promote Half and Float8_e4m3fn
The text encoding takes an unusually long time for me with this checkpoint. Why could that be?