This is a model that sits between Schnell and Dev: it accepts guidance but can still generate images in 4 steps. It has some LoRAs merged in to give it a realistic anime style. It's not censored.
AYS with DDIM or LCM gives good results. Some samplers, like DEIS, no longer produce good results. Go ahead and experiment.
Description
I upcast the FP8 to BF16 and converted it back down to GGUF. Yeah, it's not ideal, but it's my fault for never saving the original in BF16. As the warning goes, the GGUF format is subject to change, per City96. UNet ("diffusion model") only; grab your favorite CLIP/T5/VAE. BTW, it has to be a zip because of Civitai.
Comments
very well done!
I dropped your example image into Comfy to get the workflow. After changing the clip1 name from the default to clip_, I got it running for me.
I'm getting this issue trying to run it:
'torch' has no attribute 'float8_e4m3fn'
I've updated pip and made sure my ComfyUI is up to date.
I also downloaded bitsandbytes.
I've drag-and-dropped your workflow in and tried that; still got the same error.
I'm using the portable install, so perhaps that's why it's giving me issues? Not sure. I can run FP16 models of Flux, but ironically these smaller models I cannot.
EDIT
Fixed: for people with this issue, run update_comfyui_and_python_dependencies in your ComfyUI portable update folder.
--
So far, great model, and good start to getting this thing uncensored!
Yeah, I'm on Linux, so these just work right off the bat. TIL my saved images embed their workflows.
The ComfyUI update wasn't working for me, so I just cloned the latest from GitHub and replaced everything that was changed recently. A little tedious (easy, actually), but it worked great without having to reinstall, etc.
It's 11 GB; why do I need CLIP, VAE, and T5xxl?
If I choose "bnb-fp4", it's OK with euler/uniform.
It's a pure FP8 model. You're thinking of NF4, which tends to pack them together.
I don't think I'd want to go lower than 10 steps. I tried a few runs, and all of the 4-step pictures are... very low on detail in places.
Also, you seem to have accidentally trained the model's flexibility with realistic skin texture out of it.
See the example picture I attached to the gallery...
Yeah, it needs more facial diversity and styles. You can go with fewer or more steps depending on the sampler. To use DEIS you need 6-8 steps; for LCM with AYS, much less.
Good checkpoint, but sadly it doesn't play well with LoRAs.
