Use e621 tags (no underscores); artist tags are very effective in YiffyMix.
GridList Species/Artist (v64) updated!! & LoRAs (SDXL)/samples/wildcards
Recommended artist tags (NoobXL) & ComfyUI workflow.
Example Settings (use the recommended samplers to get correct quality)
Settings: SDXL (SDXL-Lightning & NoobXL)
Steps = 12~24
Sampler = "DDPM", "Euler A SGMUniform", "Euler SGMUniform"
CFG scale = 3~4
Negative embeddings SDXL = ac-neg1, ac-neg2 (you don't really need these)
Positive LoRA SDXL = SeaArt Quality Tags LoRA (you don't really need this)
Stop at CLIP layers = 2
Settings: SD 1.5 + v-pred
Steps = 30~40
Sampler = "DPM++ 2M Karras", "DPM++ SDE Karras", "DDIM", "UniPC"
CFG scale = 6~8
Negative embeddings = deformity_v6, bwu [SD-WebUI\embeddings]
Stop at CLIP layers = 1
SD VAE = kl-f8-anime2, Furception [SD-WebUI\models\VAE]
Hires. fix
Hires steps = Steps * Denoising strength
Denoising strength = 0.25
Hires upscaler = 4x-UltraMix_Smooth [SD-WebUI\models\ESRGAN]
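The hires-fix relation above (Hires steps = Steps × Denoising strength) is easy to compute; a minimal Python sketch (the helper name is mine, not a WebUI API):

```python
def hires_steps(base_steps: int, denoising_strength: float) -> int:
    """Effective hires-fix steps: the upscale pass only re-denoises a
    fraction of the schedule, so the base step count is scaled down."""
    return max(1, round(base_steps * denoising_strength))

# With the settings above: 24 base steps at denoise 0.25 -> 6 hires steps.
print(hires_steps(24, 0.25))
```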
ControlNet
ControlNet = softedge_hed, control_v11p_sd15_softedge
ControlNet SDXL = softedge_hed, sdxlsoftedge-dexined, noobai-xl-controlnet
ControlNet Weight = 0.35~0.5
ControlNet Pixel Perfect = true
Wan 2.2 ComfyUI Animated [workflow]
model = wan2.2_ti2v_5B_fp16
vae = wan2.2_vae
lora (turbo) = lora_wan2.2_ti2v_5B_turbo_lora_rank_64_fp16
lora (furry-nsfw) = wan2.2-5B_furry-nsfw-v2.0-e83
lora (livewallpaper) = wan2.2-5B_livewallpaper-720p
sampler = steps:4, cfg:1, sampler:"euler_ancestral", scheduler:"sgm_uniform"
SD WebUI
LoRA Training SDXL
imgs count = 15~50
total steps = epoch * imgs count * folder loop = 3000~4500
network_dim = 64
network_alpha = 128 (SDXL) / 16 (Noob)
learning_rate = 0.0002~0.0005
unet_lr = 0.0001 #learning_rate/2
text_encoder_lr = 0.00005 #learning_rate/4
lr_scheduler = "cosine_with_restarts"
mixed_precision = "bf16"
optimizer_type = "Adafactor"
optimizer_args = [ "scale_parameter=False", "relative_step=False", "warmup_init=False", ]
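The total-steps formula above can be sanity-checked with a small helper (names are mine; "folder loop" is the per-image repeat count that kohya-ss style trainers read from the dataset folder name — an assumption on my part):

```python
def total_training_steps(epochs: int, image_count: int, repeats: int) -> int:
    """Total optimizer steps at batch size 1: each epoch sees every
    image `repeats` times (epoch * imgs count * folder loop)."""
    return epochs * image_count * repeats

# e.g. 10 epochs x 30 images x 12 repeats = 3600, inside the 3000~4500 target
steps = total_training_steps(10, 30, 12)
print(steps, 3000 <= steps <= 4500)
```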
YiffyMix v4x V-pred Setting
# YiffyMix v4x A1111 SD-WebUI or Forge V-pred Setting
1.Download the config file ".yaml" and place it next to the model.
2.Rename the ".yaml" to match the model filename. ( check that the ".yaml" contains a [parameterization: "v"] line )
3.Restart SD-WebUI ( if the config fails to load, the model will only generate noise )
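For reference, the key part of such a config typically looks like the fragment below (a minimal sketch based on the standard LatentDiffusion config layout; the real ".yaml" shipped with the model contains many more fields — the only line you need to verify is `parameterization: "v"`):

```yaml
model:
  target: ldm.models.diffusion.ddpm.LatentDiffusion
  params:
    parameterization: "v"   # switches sampling to v-prediction
```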
# YiffyMix v4x ComfyUI or StableSwarmUI V-pred Setting
Add node "ModelSamplingDiscrete", Sampling = "v_prediction"
# ComfyUI V-pred Setting (old version)
1.Copy config ".yaml" to [ComfyUI\models\configs] and refresh ComfyUI
2."Load Checkpoint (With Config)" [Right Click\AddNode\advanced\model_merging]
workflow:[Load Checkpoint (With Config)]-[KSampler]-[VAE Decode]-[Save Image]
# v-pred mode troubleshoot
If a new version of WebUI-Forge fails to detect the v-pred model,
try updating WebUI-Forge (run update.bat).
When the v-pred model loads correctly, you will see this line in the cmd console:
left over keys: dict_keys(['v_pred'])
# v4.x sometimes produces slightly fried images; in this case:
Use "Dynamic Prompts with __wildcards__ prompts" and "batch size > 1",
or just generate again with the same prompt and parameters; it will return to normal.
Version Info
v1.x [2D,512~768] old model, unstable, low resolution
v2.x [2D,512~896] e621 dataset + 30~40% SD5 dataset
v3.0 [2D,3D,512~1088] larger dataset than v2.x; can do 2D and 3D
v3.1 [2D,3D,Real,512~1088]
more realistic, but loses some concepts (e621 tags with counts under 1000)
v3.2 [2D,3D,Real,512~1088]
unstable version; uses the SNR version of FluffyRock, more noise detail
v3.3 [2D,3D,Real,512~1088]
stable version; more detail, more sensitive to prompts
v3.4 [2D,3D,Real,512~1088] ※includes Fluffy Rock Quality Tags LoRA
stable version; more detail, clearer results, reduces some noise (like bushes and patterns)
v3.5 [2D,3D,Real,512~1088]
unstable version, contrast & detailed enhance
v3.6 [2D,3D,Real,512~1088]
stable version, more sensitive to e621 tags
v3.7 [2D,3D,Real,512~1088]
stable version; reduces noise overactivity, fixes blurry eyes and finger errors, fixes broken background depth
v4.0 [2D,3D,Real,512~1088]
v-pred version, remix from EasyFluff
accurate anatomy and style with fewer prompts;
slightly dim blue average color with smooth noise; negative prompts respond weakly
v4.1 [2D,3D,Real,512~1088]
v-pred version, color and low contrast fix, reduced realistic noise.
v4.2 [2D,3D,Real,512~1088]
v-pred version; more contrast and detail, yellow tint issue.
If it looks too fried, use Rescale CFG = 0.35
v4.3 [2D,3D,Real,512~1088]
v-pred version; clearer artist styles, fixes most yellow/brown issues
this version doesn't need CFG Rescale
v4.4 [2D,3D,Real,512~1088]
v-pred version, no more yellow/brown issue.
this version doesn't need CFG Rescale
v5.0 [2D,3D,Real,896~1536]
SDXL-Lightning version, based on Compassmix XL.
v5.1 [2D,3D,Real,896~1536]
SDXL-Lightning version; more e621 data, fewer human faces, better NSFW results.
v5.2 [2D,3D,Real,896~1536]
SDXL-Lightning version; increased average quality. Slightly less stable than v5.1 but more creative.
v6.0 [2D,3D,res:896~1536]
More characters and better sex poses; limited and uncontrollable style (mostly anime style).
v6.1 [2D,3D,Real,896~1536]
More realistic detail, reduced anime style, fixed flat and boring backgrounds.
v6.1a-RE [2D,3D,Real,896~1536]
Same as v6.1 but with the average style adjusted toward semi-realistic.
v6.2 [2D,3D,Real,896~1536]
More effective prompts; keeps the style while adding good realistic detail; saturation down a little.
v6.3 [2D,3D,Real,896~1536]
Detail (noise) level between v6.2 and v6.1; upgraded character/artist accuracy.
v6.4 [2D,3D,Real,896~1536]
Upgraded lighting quality and average detail.
Note about NoobXL for creating stable furries (v6x):
Use the "furry" tag in the prompt so the AI creates furries, not humans.
Use the "no human" tag in the prompt to stop NoobXL from adding humans and to reduce the anthro effect (more original style).
Use "anime style" in the negative prompt to reduce the classic booru style and get more realistic output.
Some SD prompt tricks:
Combine two characters:
characterA \(characterB\)
Avoid tag bleeding:
(chain:0)-link fence
(cowboy:0) shot
high (collar:0)
Combine multiple tags to strengthen them and reduce token use:
from side + side view = from side view
crossed legs + legs up = crossed legs up
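The zero-weight trick above uses A1111's `(tag:weight)` attention syntax; a tiny illustrative helper for building such prompts (the function is mine, not part of any WebUI):

```python
def weighted(tag: str, weight: float) -> str:
    """Format a tag in A1111 attention syntax: (tag:weight)."""
    # Emit integers without a trailing ".0" so (chain:0) comes out clean.
    w = int(weight) if float(weight).is_integer() else weight
    return f"({tag}:{w})"

# Zeroing the weight keeps the token for context while muting its concept:
print(weighted("chain", 0) + "-link fence")  # (chain:0)-link fence
print(weighted("cowboy", 0) + " shot")       # (cowboy:0) shot
```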
Basic Style
Negative Prompt SD1.5
unusual anatomy, mutilated, malformed, watermark,
amputee, mosaic censorship, sketch, monochrome
Negative Prompt SDXL
malformed, worst quality, bad quality, signature, text, url
3D Artwork Style
Prompt v5x: blender \(software\), ray tracing, 3d, unreal engine
Photorealistic Style SDXL
Prompt v5x:
by Mandy Disher, by Wim Wenders, by Robert Rauschenberg, [by Fossa666::0.65],
ultra realistic, photorealism, photograph
Negative prompt v5x: sketch, manga, vector, line art, toony
Prompt v6x: film photography, photorealistic, film grain
Negative prompt v6x: anime style, vibrant, pastel
Photorealistic Style SD1.5
Prompt sd15: [:photorealistic, analog style, realistic, photorealism:0.5]
Negative prompt sd15: [:bwu:0.5]
Recommended refiner: IndigoFurryMix v110-Realistic, switch at 0.5
Description
# YiffyMix v50 SDXL doesn't need PonyXL score tags #
# Don't use an SD1.5 VAE in SDXL, you will get fried images #
Use settings [steps=12~24, cfg=4~6]
Recommended samplers:
smooth : ["DDPM", "Euler A Turbo", "Euler A SGMUniform"]
detail and balance : ["Euler", "Euler SGMUniform"]
high detail and high contrast : ["DPM++ 2M SDE SGMUniform"]
Recipes v50:
partA = Compassmix XL + Juggernaut XL v9 * 0.35
YiffyMix v50 = partA + ClipL[YiffyMix v37]
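The "A + B * 0.35" notation above reads like A1111's weighted-sum checkpoint merge, result = A*(1-m) + B*m per tensor (my assumption about the merge mode used); a sketch over plain dicts standing in for state-dict tensors:

```python
def weighted_sum(a: dict, b: dict, multiplier: float) -> dict:
    """Weighted-sum merge: result = A*(1-m) + B*m for each parameter."""
    return {k: a[k] * (1.0 - multiplier) + b[k] * multiplier for k in a}

# Toy 'checkpoints' with a single shared parameter each:
part_a = weighted_sum({"w": 0.0}, {"w": 1.0}, 0.35)
print(part_a["w"])  # 0.35
```

The second recipe line (taking the CLIP-L encoder from v37) would correspond to replacing only the text-encoder keys of the merged state dict.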
FAQ
Comments (90)
Is it possible to have the 5x models usable on Civitai?
I can't select the model in the online Civitai generator.
Any chance for an SD1.5 version of this, please?
I hope you don't stop updating the v4x models. They're quite good for low-end setups.
I have been using 1.5 based models almost exclusively so far. Turns out I need about as much VRAM for a 1024x1024 gen with this XL model as for a 1.5 hires fix from 640x640 to 1600x1600. That seems like a bit of a deal breaker. And I'm not sure 8GB of VRAM are even considered low-end.
Any recommended VAE for v50-SDXL?
IMHO v44 produces better and faster results than v50-SDXL.
i agree. pony xl is better
What's worse is that the vram usage is much higher than for 1.5 based yiffymix models. I can't generate as hi res as i used to. Think I won't be using this pony/XL-based branch at all.
Any chance for a pony version
Finally, an XL version of YiffyMix. I'm still waiting for XL versions of Fluffyrock, Furryrock, BB95 and EasyFluff.
I just tried the XL version and it is not able to generate anatomically correct female genitalia. A great shame, as I was very excited about this version and I was very disappointed.
Same for me; I keep trying different settings but nothing seems to work.
male genitalia as well, a former strong point of all recent yiffy models
When I select the XL model I get the following error: AttributeError: 'NoneType' object has no attribute 'lowvram'
Please help me fix this.
Hi, did you manage to solve the problem?
@Vetras Good afternoon. No, I couldn't; I switched back to the v44 model. Maybe my laptop is not suitable for SDXL models, or it's something else. I have a laptop with a GTX 1060 6GB, an Intel Core i7-8750H processor, and 8GB of RAM. I would really like to solve this problem and try the SDXL model.
(if there's anything wrong or unclear I've written. I just used a translator.)
Did you use "--lowvram --precision full --no-half --skip-torch-cuda-test" in the webui-user COMMANDLINE_ARGS?
I found this problem only in the new Automatic1111 version v1.9.3.
Or you can try WebUI-Forge (includes git and python), same interface as the A1111 WebUI (remember to run update.bat on first launch).
@chilon249 Good afternoon, I will try your steps to correct the errors. Yes I am using Automatic1111 maybe the problem is because of that.
@chilon249 so I guess I'm gonna have to put in webui-forge. At my speed, it'll take a day to download, but okay.
@chilon249 No, the most interesting thing: everything worked well on Automatic, but Forge...
Funny. The second download solved the problem.
YiffyMix v50 SDXL doesn't need PonyXL score tags
Don't add score tags... get colorful blobs; add score tags, and it starts working.
Is there a config file for the v50 version?
it's not vpred so no
which vae should i use, it's my first time using SDXL
Also any recommended negative or positive embeds to use?
Quality Embeddings For Furries (bwu, dfc, ubbp, updn) is not compatible here
@chilon249 Thank You
Can't wait to see what you cook up with SDXL :)
When will the XL model be getting a grid list?
yeah, xl is here, it's officially over for me (amd user), now im going to leave stable diffusion behind and dance naked in a fertile flower field, or maybe try to find a philosopher's stone and start alchemymaxxing
Hello, I'm also an AMD user and I can use many SDXL checkpoints out there. If you have the VRAM (I have 16GB in my 6800XT) you can use ZLUDA with Stable Diffusion to keep making pictures with AI; there are plenty of tutorials that explain how to make it work. Hope it helps.
@jesussalasgil995 I watched some videos about zluda, but my GPU is not compatible with it. But I'm good for now. Maybe I'll play around with sd 1.5 sometimes. Even so, thanks for the help
What GPU do you have? I've managed to run many SDXL models when I had an RX 6600XT with 8GB of VRAM (on Linux), but it's not pretty, you need many optimization options enabled and inference time is really long on high-res.
v50 is a real struggle for me to get anything decent out of it.
Well, it's early; I am looking forward to seeing its development. An alternative to Pony would be awesome, more options never hurt.
Curious what resolution people are generating on with v50, and how much VRAM they have for it...
Also curious, if e6 artist tags (and other tags) still work.
(edit: for myself, 1024x1024 seems to be the limit with 8GB of NVidia VRAM. Maybe using -medvram (A1111) would make a small difference, haven't tried yet, but with 1.5 models I never noticed one)
Heavy user here- I have 24gb vram (nvidia 4090) and I am rendering these at 1024x1024, highres.fix 2x resolution.
E6 tags work fine! some small differences between v.43/44 but otherwise the same.
3060Ti 8GB, 32GB RAM, 20GB Virtual Memory on SSD
1024x1024 work fine, take ~20-30 sec, HR up to 2x without scripts take ~8-10min, 1.5x - ~2-4 min
Hi, a 4060Ti with 16GB VRAM works really well (a bit slow compared with a 4070Ti or higher, but it works :D): 1024x1024, 1152x896 or 1216x832. I've gotten 3040x2080 without scripts (1216x832 upscaled x2.5). e621 tags are okay most of the time, but it doesn't always know some characters, like Susie from Deltarune. (sorry for my bad english)
RX 5700 8GB vram here, runs fine at 1024x1024 on --medvram, at the start it took ~4 minutes with Zluda but after tweaking some I got it down to around 2 minutes 40 secs per image
Does this work on SD 1.5?
v50 is pretty good so far; while not head and shoulders better than previous versions, I think there's a ton of room for improvement, and it can put out some pretty great results.
As an addendum to my thoughts on v50: its best feature is the faces; with or without ADetailer, it creates spectacular faces. Its weakest point so far is male characters; I can't tell if it's artist-dependent, but it just doesn't like male parts a lot of the time.
I'm having trouble with your guide for YiffyMix v50 SDXL; I can't find where I'm supposed to get the config file from. I looked around and was unable to find it :(
The config file is only needed for v4 models. For v5 no config file is needed.
But I'm quite confused about what the v4 or v5 configs are; I see no one explaining it in detail so I can understand. I feel like I'm reading something without context.
@champagne1 If you download a v4 model (that's the v40-v44 models), you also need to download a small config file and place it in the same folder as the model. You can find the config file for v4 models in the download section of each model. If you download a v5 model, no config file is needed; just place the model file in the folder and use it as usual.
config files are for checkpoints using v-pred only.
v50 doesn't use v-pred
OK, so I don't need the config file, and I haven't used one for v50 SDXL; then why does my ComfyUI YiffyMix v50 SDXL setup not work? I followed the guidelines shown and I can't make it work to save my life, at least not so the photos make sense or follow prompts. I'll try again later to troubleshoot it, but if you have any advice I'd be really happy.
I put ComfyUI workflow examples here; you can try them.
@chilon249 Finally made it work with the workflow; I understand it better now, thanks for all the help.
I am also having problems... I am on A1111 (don't know if that one needs the yaml or not, but I also can't find it if it does x.x) and even though I follow all of the specified settings, I end up getting a fried image. I've copied some of the examples users have posted exactly, and got the exact same image -- just fried. I'm using the correct VAE as well... Any idea? The A1111 webui is up to date, and I just downloaded v50SDXL.
@LionFuzz I found out what was wrong from chilon249 and his workflow for ComfyUI, and I also managed to get some done on SD-A1111. As I understand it, you are not supposed to use a VAE: just use the checkpoint, and the same goes for the refiner. I think that's how it works for A1111 as well, but I'm not sure; I'm still struggling to get ComfyUI to do what I want in prompts, as well as A1111 YiffyMix v50.
Why SDXL Lightning??? So much quality is lost through this method; it's really for quickly testing concepts.
Would you be able to release a normal SDXL version or a Pony version?
If you're really set on doing something different with the SDXL versions, why not DPO?
I'm not a finetune trainer; I just build recipes and merge models.
YiffyMix relies on other models; this time it's the CompassMix XL Lightning model.
Not everyone has a good GPU; normal SDXL is too slow, and a Lightning model is a good option to save time.
I gotta say it still retains quality in a way I never would have expected; it's pretty damn good.
CompassMix XL Lightning was an interesting choice and it performs great.
I don't know SHIT about model merging, but yiffy + DucHaiten could have crazy good potential!
WHY SO SLOW DOWNLOAD
WHY DO MINE LOOK SO BAD
Every new model you try (or version thereof) takes some getting used to, as well as a bit of relearning at first.
why the caps ?
Please fix Yiffymix, I'm getting low-quality generations with the same prompts
Thanks for this model. I really like the ability to control the light the way I want it
Why is v36 the only version with on-site generation available???
Does anyone have any luck genning a male and a female (two characters)?
No such luck; sometimes it's two females, or two males, even with the BREAK tag.
Only with controlnet or inpainting, unfortunately
@C1yde I also use Regional prompt, which I have 50/50 luck of using it.
@koopa990 I've tried that a bit as well but it's just way too finicky in even the best cases and completely breaks when using LORA
@C1yde maybe in the close, close future, the machine can be capable of genning multiple people without a sweat!!
@koopa990 Haha I feel your pain, I can get it to somewhat work in SD1.5 IndigoFurryMix when prompting chars like Krystal and FoxMcCloud, but even then I have to use undies instead of genitalia, otherwise it mixes them up in the most silly ways, lol. I hope AI results becomes better over the next 1-2 years without complex add-ons or hacks!
Not sure why but no matter what I do, the model fails to load, and then gets in a loop :(
The v50 one specifically
Now Pony version >:(
Can I use e621 tags that are aliased to other tags? I don't know if they would give good results.
I'm clearly doing something wrong, because the same prompts and settings as the example images give me terrible-quality images -- legible, just as if drawn with crayons. I'm using A1111. Do other people have this issue? What was the root cause?
Same for me; I copied all the same settings and prompts. It also happens on Pony checkpoints.
If I change the character, or a few keywords, the output is completely different.
Same here, I've tried to get the newest version working for me off and on for weeks now, but I just can't seem to make it create a halfway decent image. Been sticking with v44 as of late.
Follow the YiffyMix v50 SDXL-Lightning settings first.
In YiffyMix v50 only these samplers give high-quality output:
smooth : ["DDPM", "Euler A Turbo", "Euler A SGMUniform"]
detail and balance : ["Euler", "Euler SGMUniform"]
high detail and contrast : ["DPM++ 2M SDE SGMUniform"]
A1111 emphasis parentheses may generate empty noise images.
Check that the VAE is "none" or an SDXL-version VAE; don't use an SD1.5 VAE.
Are you using an SDXL VAE? If you are using a different VAE, it might be frying your images.
@chilon249 Thanks for your comment. I did follow the instructions on the page properly. VAE set to none (tried the SDXL version vae too) and samplers set correctly. I am running a1111 with the arguments --medvram-sdxl --xformers --no-half - However, this has not impacted any previous versions of the model, they all still work great. Haven't quite fixed it yet!
plan pony version
I don't know if it is just a "me" thing, but the v5.0 SDXL version of the model seems MUCH more prone to creating human, or predominantly human, figures with prompts that created perfectly normal furries before with v4.2 and v4.4. They often have little or no furry characteristics at all. Is this because there's no VAE to guide it yet, or is something in my prompting triggering it?
I have this problem too. Approximately 3 out of 10 images are humanlike creatures and not furries. Well, I think that is a problem with the SDXL model from which v50 was created.
SDXL reproduces more SD artists than the SD1.5 model.
If you use classical artists (like Pino Daeni) whose works are mostly human portraits, you might get humanoid faces.
Try lowering the strength of SD artists, put "humanoid" in the negative prompt, or add e621 artists to balance it.
It's good, but I would like more artist tags learned; also, sometimes it doesn't like to produce the genitals. The sex positions are okay, but the genitals don't show up. I often do the base generation with another model and use this one for img2img + upscale.
Why do my images always come out more saturated than they should be? How do I fix that? I'm using the proper VAE.
If you're using the 4.x series, you need to use cfg rescale. For A1111, download this extension https://github.com/Seshelle/CFG_Rescale_webui, for ComfyUI, just use the built-in RescaleCFG node.
@CosmicElement Thank you so much, this fixed the problem for me =]
Here's a recommendation for the next SDXL version of this model: Use the XL version of Fluffyrock when it releases, and maybe Seaart Furry XL or Indigo Furry Mix XL, instead of Compassmix and Juggernaut XL.
Using Compassmix and Juggernaut XL as the base made the experience I gained prompting the SD1.5-based versions of YiffyMix and similar models somewhat useless.
Meanwhile, you can make a lora similar to PDV6XL artist tags.
Well... there's no FluffyRock XL yet; CompassMix XL is the closest thing to a FluffyRock XL model.
With Pony-based recipes you would lose a lot of SD and e621 artist tags and some species.
Pony XL is highly incompatible with any SDXL v1.0 model.
If a model needs a lot of LoRA patches to be usable, I can't call it a good model.
(p.s. IndigoFurryMix XL is Pony-based; CompassMix XL, BananaStrike XL, and YiffyMix v5x are based on SeaArtXL)
WD-KL-F8-Anime2 seems to be missing, is there an alternative we should use?
you can find it here https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt. An alternative VAE for the 1.5 models is Furception https://huggingface.co/RedRocket/furception_vae/blob/main/furception_vae_1-0.safetensors. For sdxl models use https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/blob/main/sdxl_vae.safetensors. Most models should come with their own VAE so you can just set the VAE setting to none or automatic
Available On (12 platforms)
Same model published on other platforms; may have additional downloads or version variants.