Goddess Raw (Schnell)
Goddess Raw is a unique FLUX (Schnell) model that produces realistic skin at very low step counts. I even put my results up against the ULTRA model.
NF4 FULL CHECKPOINT - do NOT load an additional TE, CLIP, or VAE
FP8 and GGUF require an additional TE, CLIP-L, and VAE
FP8 model: load as Automatic or Default, not FP8, since it is a mixed-precision UNet (BF16/FP8)
No more identical faces
Focused on creating realistic-looking images at 4-6 steps
Some NSFW training, no harder than full female nudity (no sexually explicit training)
Forge users: I highly recommend adding this line to your USER.bat:
set COMMANDLINE_ARGS= --unet-in-bf16 --vae-in-fp32 --cuda-malloc --clip-in-fp32
*--cuda-malloc only if using an NVIDIA RTX card
I have used prompts for testing based on on-site remixes; in some cases I feel this model outperformed the DEV Pro in prompt adherence and realism.
Description
FAQ
Comments (7)
Interesting... any chance of a Q4 GGUF unet?
If it plays well with the scaled T5, given that seems to be the most popular.
Done
@Felldude Interesting, the quality is very good. I know this is silly, but can I get a Q8 GGUF? I know the FP8 is right there. One thing I've been trying to test is the difference between FP8 and Q8 GGUF. It's difficult to see which one is better, and there are few homebrew models that have both FP8 and Q8 GGUF.
I came across this Reddit thread - https://www.reddit.com/r/StableDiffusion/comments/1eso216/comparison_all_quants_we_have_so_far/
And it suggests that Q8 might produce very similar results to FP16, but they didn't really touch on FP8. In fact, the only example of FP8 is actually worse than Q8...
If you don't want to, that's fine, just asking just in case.
Edit: This is more for our brothers and sisters like me with 16GB VRAM, who can afford more but can't quite go full FP16 LOL
@SencneS Q8 has good results, but GGUF is slower for most people than NF4 or FP8. I would say Q8 is better quality in most cases than FP8; however, most articles will tell you that you can't compare seed to seed. With CPU offloading, BF16/FP16 are often the same speed as GGUF, so I only do GGUF models on request.
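The Q8-vs-FP8 quality question above can be illustrated with a toy simulation: round the same random weights with an int8-plus-scale scheme (roughly what GGUF's Q8_0 does per block) and with FP8 e4m3 rounding, then compare the error. This is a hypothetical sketch with made-up tensor sizes, not the actual GGUF or safetensors code paths:

```python
import numpy as np

rng = np.random.default_rng(0)
# Typical small-magnitude weights (scale 0.02 is an assumption)
w = rng.normal(0, 0.02, size=10_000).astype(np.float32)

def quant_q8(x):
    """Symmetric int8 with a per-tensor scale (Q8_0 uses a scale per 32-value block)."""
    scale = np.abs(x).max() / 127.0
    q = np.round(x / scale).clip(-127, 127)
    return q * scale

def quant_fp8_e4m3(x):
    """Round to a nearby e4m3 value: 1 sign, 4 exponent, 3 mantissa bits."""
    m, e = np.frexp(x)            # x = m * 2**e with 0.5 <= |m| < 1
    m = np.round(m * 16) / 16     # keep 4 significant binary digits
    y = np.ldexp(m, e)
    return np.clip(y, -448, 448)  # e4m3's approximate dynamic range

err_q8 = np.abs(quant_q8(w) - w).mean()
err_fp8 = np.abs(quant_fp8_e4m3(w) - w).mean()
print(f"mean abs error  Q8: {err_q8:.2e}   FP8(e4m3): {err_fp8:.2e}")
```

Int8 spends all its precision on a fixed grid scaled to the tensor, while FP8 keeps a roughly constant relative error across magnitudes, which is why the two can trade places depending on the weight distribution.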
@Felldude That's what I mean, Q4 is popular because it's for the 8GB VRAM brothers. I've got 16GB, so I can afford a little more quality. That's why I'm asking for a Q8 GGUF.
I'd be more than happy to do the conversion if I have the main file. I'll even send it to you to throw it up here.
@SencneS I have no issue with posting the Q8, but I do space out any updates to a checkpoint; with all the versions and types it becomes confusing for new users. But I would encourage you to try the FP8 model, as it is the largest possible model a 16GB card user can fit, and due to it having BF16 blocks the quality is also superior, and it is faster.
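The "largest model a 16GB card can fit" claim can be sanity-checked with back-of-the-envelope arithmetic. The parameter count and bytes-per-parameter figures below are approximations I'm assuming (FLUX's transformer is roughly 12B parameters; Q8_0 adds about half a bit of scale overhead per weight; the NF4 figure is a rough estimate), not exact file sizes:

```python
# Rough VRAM footprint of the FLUX UNet under different precisions.
PARAMS_UNET = 11.9e9  # approximate FLUX transformer parameter count
GIB = 1024**3

def size_gib(params, bytes_per_param):
    return params * bytes_per_param / GIB

for name, bpp in [("BF16", 2.0), ("FP8", 1.0), ("Q8 GGUF", 1.0625), ("NF4", 0.55)]:
    print(f"{name:8s} ~ {size_gib(PARAMS_UNET, bpp):5.1f} GiB")
```

BF16 lands above 22 GiB, which overflows a 16GB card, while FP8 comes in around 11 GiB, leaving headroom for activations once the text encoders and VAE are offloaded.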