
    Use e621 tags (without underscores); artist tags are very effective in YiffyMix.

    GridList Species/Artist (v6.4) update & LoRAs (SDXL) / samples / wildcards

    Recommended artist tags (NoobXL) & ComfyUI workflow.

    Settings: SDXL (SDXL-Lightning & NoobXL)

    • Steps = 12~24

    • Sampler = "DDPM", "Euler A SGMUniform", "Euler SGMUniform"

    • CFG scale = 3~4

    • Negative embeddings SDXL = ac-neg1, ac-neg2 (You don't really need this)

    • Positive LoRA SDXL = SeaArt Quality Tags LoRA (You don't really need this)

    • Stop at CLIP layers = 2
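    As a sanity check, the settings above can be packed into keyword arguments for a diffusers-style pipeline call. This is a minimal sketch; the argument names (`num_inference_steps`, `guidance_scale`, `clip_skip`) assume the Hugging Face diffusers API and are not from this card.

```python
# Minimal sketch: collect the recommended SDXL-Lightning / NoobXL settings
# into kwargs for a diffusers-style pipeline call. Ranges follow the card.
def sdxl_generation_kwargs(steps: int = 16, cfg: float = 3.5) -> dict:
    if not 12 <= steps <= 24:
        raise ValueError("card recommends 12~24 steps")
    if not 3.0 <= cfg <= 4.0:
        raise ValueError("card recommends CFG 3~4")
    return {
        "num_inference_steps": steps,
        "guidance_scale": cfg,
        "clip_skip": 2,  # "Stop at CLIP layers = 2"
    }
```

    A call like `pipe(prompt, **sdxl_generation_kwargs())` would then apply the card's defaults.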

    Settings: SD 1.5 + v-pred

    • Steps = 30~40

    • Sampler = "DPM++ 2M Karras", "DPM++ SDE Karras", "DDIM", "UniPC"

    • CFG scale = 6~8

    • Negative embeddings = deformity_v6, bwu [SD-WebUI\embeddings]

    • Stop at CLIP layers = 1

    • SD VAE = kl-f8-anime2, Furception [SD-WebUI\models\VAE]

    Hires. fix

    • Hires steps = Steps * Denoising strength

    • Denoising strength = 0.25

    • Hires upscaler = 4x-UltraMix_Smooth [SD-WebUI\models\ESRGAN]
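    The hires-steps rule above is a simple multiplication; a tiny helper makes it concrete (the function name is mine, not from any UI):

```python
# "Hires steps = Steps * Denoising strength", per the card.
def hires_steps(steps: int, denoising_strength: float = 0.25) -> int:
    # Round to the nearest whole step, with a floor of 1 step.
    return max(1, round(steps * denoising_strength))
```

    For example, 40 sampling steps at denoising strength 0.25 gives 10 hires steps.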

    ControlNet

    Wan 2.2 ComfyUI Animated [workflow]

    SD WebUI

    LoRA Training SDXL

    • imgs count = 15~50

    • total steps = epoch * imgs count * folder loop = 3000~4500

    • network_dim = 64

    • network_alpha = 128 (SDXL) \ 16 (Noob)

    • learning_rate = 0.0002~0.0005

    • unet_lr = 0.0001 #learning_rate/2

    • text_encoder_lr = 0.00005 #learning_rate/4

    • lr_scheduler = "cosine_with_restarts"

    • mixed_precision = "bf16"

    • optimizer_type = "Adafactor"

    • optimizer_args = [ "scale_parameter=False", "relative_step=False", "warmup_init=False", ]
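    Assuming the kohya-ss sd-scripts trainer (which these parameter names match), the list above corresponds to a config fragment roughly like this; the values are the card's, the file layout is illustrative:

```toml
# Illustrative kohya-ss sd-scripts fragment using the card's values.
network_dim = 64
network_alpha = 128                 # card: 128 for SDXL, 16 for Noob
learning_rate = 2e-4                # card: 0.0002~0.0005
unet_lr = 1e-4                      # learning_rate / 2
text_encoder_lr = 5e-5              # learning_rate / 4
lr_scheduler = "cosine_with_restarts"
mixed_precision = "bf16"
optimizer_type = "Adafactor"
optimizer_args = [ "scale_parameter=False", "relative_step=False", "warmup_init=False" ]
```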

    YiffyMix v4x V-pred Setting

    # YiffyMix v4x A1111 SD-WebUI or Forge V-pred Setting

    1. Download the ".yaml" config file and place it next to the model.

    2. Rename the ".yaml" to match the model's name. (Check that the ".yaml" contains the line [parameterization: "v"].)

    3. Restart SD-WebUI. (If the config fails to load, the model will only generate noise.)
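    Steps 1–2 amount to giving the config the checkpoint's basename; a small Python sketch (filenames here are examples, not the actual download names):

```python
# The v-pred ".yaml" must share the checkpoint's basename.
from pathlib import Path

def paired_config_name(checkpoint: str) -> str:
    # "yiffymix_v40.safetensors" -> "yiffymix_v40.yaml"
    return str(Path(checkpoint).with_suffix(".yaml"))
```

    You would then rename the downloaded config to that name, e.g. `Path("downloaded.yaml").rename(paired_config_name("yiffymix_v40.safetensors"))`.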

    # YiffyMix v4x ComfyUI or StableSwarmUI V-pred Setting

    Add the "ModelSamplingDiscrete" node and set sampling = "v_prediction".

    workflow demo
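    In ComfyUI's API-format JSON, the node from the instruction above looks roughly like this; the node ids, wiring, and the `zsnr` default are illustrative assumptions:

```json
{
  "1": { "class_type": "CheckpointLoaderSimple",
         "inputs": { "ckpt_name": "yiffymix_v40.safetensors" } },
  "2": { "class_type": "ModelSamplingDiscrete",
         "inputs": { "sampling": "v_prediction", "zsnr": false,
                     "model": ["1", 0] } }
}
```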

    # ComfyUI V-pred Setting (old version)

    1.Copy config ".yaml" to [ComfyUI\models\configs] and refresh ComfyUI

    2."Load Checkpoint (With Config)" [Right Click\AddNode\advanced\model_merging]

    workflow:[Load Checkpoint (With Config)]-[KSampler]-[VAE Decode]-[Save Image]

    # v-pred mode troubleshoot

    If a new version of WebUI-Forge fails to detect the v-pred model, try updating WebUI-Forge (run update.bat).

    When loading v-pred model, you will see this line in cmd console:

    left over keys: dict_keys(['v_pred'])

    # v4.x sometimes produces slightly fried images in this case:

    Using "Dynamic Prompts with __wildcards__ prompts" and "batch size > 1".

    Just generate again with the same prompt and parameters, and it will return to normal.

    Version Info

    v1.x [2D,512~768] old model, unstable, low resolution

    v2.x [2D,512~896] e621 dataset + 30~40% SD5 dataset

    v3.0 [2D,3D,512~1088] larger dataset than v2.x; can do 2D and 3D

    v3.1 [2D,3D,Real,512~1088]

    more realistic, but loses some concepts (e621 tags with counts below 1000)

    v3.2 [2D,3D,Real,512~1088]

    unstable version; uses the SNR version of FluffyRock; more noise detail

    v3.3 [2D,3D,Real,512~1088]

    stable version, more detail, more sensitive to prompts

    v3.4 [2D,3D,Real,512~1088] ※include Fluffy Rock Quality Tags-LoRA

    stable version; more detail, clearer results, reduces some noise (like bushes and patterns)

    v3.5 [2D,3D,Real,512~1088]

    unstable version; contrast & detail enhanced

    v3.6 [2D,3D,Real,512~1088]

    stable version, more sensitive to e621 tags

    v3.7 [2D,3D,Real,512~1088]

    stable version; reduces noise overactivity, fixes blurry eyes and finger errors, fixes broken background depth

    v4.0 [2D,3D,Real,512~1088]

    v-pred version, remixed from EasyFluff

    accurate anatomy and style with fewer prompts,

    slightly dim blue average color with smooth noise; weak negative-prompt issue

    v4.1 [2D,3D,Real,512~1088]

    v-pred version, color and low contrast fix, reduced realistic noise.

    v4.2 [2D,3D,Real,512~1088]

    v-pred version; more contrast and detail, yellow tint issue.

    If results look too fried, use CFG Rescale = 0.35.

    v4.3 [2D,3D,Real,512~1088]

    v-pred version; clearer artist styles, fixes most yellow/brown issues

    this version doesn't need CFG Rescale

    v4.4 [2D,3D,Real,512~1088]

    v-pred version, no more yellow/brown issue.

    this version doesn't need CFG Rescale

    v5.0 [2D,3D,Real,896~1536]

    SDXL-Lightning version, based on Compassmix XL.

    v5.1 [2D,3D,Real,896~1536]

    SDXL-Lightning version; more e621 data, fewer human faces, better NSFW results.

    v5.2 [2D,3D,Real,896~1536]

    SDXL-Lightning version; increases average quality. Slightly less stable than v5.1 but more creative.

    v6.0 [2D,3D,res:896~1536]

    More characters and better sex poses; limited and less controllable style (mostly anime style).

    v6.1 [2D,3D,Real,896~1536]

    More realistic detail, reduced anime style, fixes flat and boring backgrounds.

    v6.1a-RE [2D,3D,Real,896~1536]

    Same as v6.1 but the average style is adjusted to semi-realistic.

    v6.2 [2D,3D,Real,896~1536]

    More effective prompting; keeps style while adding good realistic detail; saturation slightly lowered.

    v6.3 [2D,3D,Real,896~1536]

    Detail (noise) level between v6.2 and v6.1; improved character/artist accuracy.

    v6.4 [2D,3D,Real,896~1536]

    Improved lighting quality and average detail.

    Notes on getting stable furry output from NoobXL (v6.x):

    Using the "furry" tag in the prompt makes the AI create a furry instead of a human.

    Using the "no human" tag in the prompt keeps the NoobXL model from adding humans and reduces the anthro effect (more original style).

    Using "anime style" in the negative prompt reduces the classic booru style and gives more realistic results.

    Some SD prompt trick:

    Combine two character:

    characterA \(characterB\)

    Avoid tag bleeding:

    (chain:0)-link fence

    (cowboy:0) shot

    high (collar:0)

    Combining multiple tags to strengthen them and reduce token use:

    from side + side view = from side view

    crossed legs + legs up = crossed legs up

    Basic Style

    Negative Prompt SD1.5

    unusual anatomy, mutilated, malformed, watermark,

    amputee, mosaic censorship, sketch, monochrome

    Negative Prompt SDXL

    malformed, worst quality, bad quality, signature, text, url

    3D Artwork Style

    Prompt v5x: blender \(software\), ray tracing, 3d, unreal engine

    Photorealistic Style SDXL

    Prompt v5x:

    by Mandy Disher, by Wim Wenders, by Robert Rauschenberg, [by Fossa666::0.65],

    ultra realistic, photorealism, photograph

    Negative prompt v5x: sketch, manga, vector, line art, toony

    Prompt v6x: film photography, photorealistic, film grain

    Negative prompt v6x: anime style, vibrant, pastel

    Photorealistic Style SD1.5

    Prompt sd15: [:photorealistic, analog style, realistic, photorealism:0.5]

    Negative prompt sd15: [:bwu:0.5]

    Recommended refiner: IndigoFurryMix v110-Realistic, switch at 0.5

    Description

    Recipes v40:

    partA = (EasyFluff v11.1 + YiffyMix v36 - FluffyRock-e99-snr-e72) * trainDiff:1.0

    partB = (partA + Newdawn v52 - SD5) * trainDiff:0.45

    partC = (partB + IndigoFurryMix v35 Realistic - YiffyMix v22) * trainDiff:0.45

    preYM40 = partC + CLIP:[ReV Animated * 0.35 + EasyFluff v11.2 * 0.27]

    partE = EasyFluff v10-Funner + preYM40 * 0.55 + CLIP:[ReV Animated * 0.35 + EasyFluff v10-Funner * 0.27]

    YiffyMix v40 = partE + LoRA:[Detail Tweaker * 0.2]

    FAQ

    Comments (119)

    lanceshockerDec 25, 2023
    CivitAI

    You mind giving a little more details on what needs to be done on the config file? I'm not sure what you mean by rename the .yaml file. do I rename it to .safetensors?

    PsySpyDec 25, 2023

    It means to rename the yaml to match the safetensor, so it's like yiffymix4.yaml to go with the yiffymix4.safetensors

    PunoMycinDec 25, 2023

    @PsySpy You commented nothing pungent

    lanceshockerDec 26, 2023

    @PsySpy That still doesn't explain anything though... if two of the same file exits in the same folder then it's just gonna be renumbered

    nsfwpersonalai2Dec 25, 2023· 1 reaction
    CivitAI

    I would like Civitai to have support for v-pred models with .yaml files for the online image generator.

    EternalTrashDec 25, 2023
    CivitAI

    I cannot create ANYTHING with the new V40 Version. Everything turns into a white blury mess of dots... Anyone else has the same problem or has an idea what's wrong? V36 works fine, also, don't have this issue with any other models.

    morric58Dec 25, 2023· 2 reactions

    Did you set cfg rescale to 0.7 ? Its a vpred model so you need the config file in the model folder and you need to set cfg rescale, typically to 0.5 or 0.7.

    EternalTrashDec 25, 2023

    @morric58 Thank you, I added the config file, which made pictures apprear. I don't have a CFG Rescaler tho (it seems to be an additional module for SD and the config file doesn't have any values with this name) and the installation seems to be more bother than worth, so I'll leave it at that and use a different model.

    tsoetDec 25, 2023· 1 reaction

    @EternalTrash Read the 'about this version' text at the top of the page, just below the download options for instructions. You need the yaml config file and probably want the a1111 extension linked, or click the link to 'easyfluff' in the expanded readme and you can get comfyui module from its model card instructions.

    angelhardH833Dec 25, 2023
    CivitAI

    This model is incredible, it can adapt to the style you are looking for, and it does it very well. 10/10 👍

    sansmiaDec 25, 2023
    CivitAI

    I've noticed with identical settings, this new v40 model takes significantly longer to generate images. Is that a result of the v-pred stuff? I've never dealt with a v-pred model before.

    tombotDec 25, 2023
    CivitAI

    So this is a bit odd, but v40 works with Easy diffusion just fine, but only if you use the WD-KL-F8-Anime2 VAE you reccomended. Trying to use the checkpoint on it's own makes it look burned out/deepfried. Did you not bake the VAE into the checkpoint file this time around?

    ATJonzieDec 26, 2023

    I want to know as well, same issue!

    ATJonzieDec 26, 2023

    read the about this version description and follow along, works fine now!

    BadcoonieDec 26, 2023

    Weird how 4.0 seems to work just fine on Easy Diffusion but, even after following all the directions under 'About model', I still can't get it work work in webui AT ALL. Oh well

    pihlawrkr738Dec 25, 2023
    CivitAI

    Works just fine for me, no issues that I can see.

    CookcookiesDec 26, 2023· 5 reactions
    CivitAI

    Weird, V36 does work but using the new V40 doesn't work.

    2657989Dec 26, 2023
    CivitAI

    Do you have instructions about how to get the model to work correctly with ComfyUi. I have made sure that the config file and the checkpoint file have the same name but with different file types (yiffymix_v40.safetensors & yiffymix_v40.yaml). I have tried having them placed next to each other in the checkpoint folder and then tried having the yaml file placed in the configs folder. I have acquired a cfgrescale node but regardless it does not work :(. The output is just a noisy mess. What am i doing wrong, or can the model only be used with the WebUi right now?

    chilon249
    Author
    Dec 26, 2023· 1 reaction

    Copy the config "YiffyMix v40.yaml" to [ComfyUI\models\configs] and refresh ComfyUI

    Use "Load Checkpoint (With Config)" node [RightClick\AddNode\advanced\model_merging]

    2657989Dec 26, 2023

    Thanks :), it works now. I just have to tinker with it a bit to get the optimal output. And while I have your attention, I just want to say how much I and the rest of the community appreciate your models and the work you put into them. Keep up the good work, you are making a difference <3

    paulosergiocardozoDec 26, 2023· 2 reactions
    CivitAI

    Is it possible to launch a version of your model without needing external files? I use your model on sites like " seaart.ai " and " tensor.art " they don't support any type of files that have to be added externally. Thank you for your attention.

    armin321ffxiv873Dec 27, 2023
    CivitAI

    Sadly don't work for me, i tried everything

    MobbunDec 27, 2023

    "This checkpoint includes a config file, download and place it along side the checkpoint."

    armin321ffxiv873Dec 27, 2023

    @Mobbun I did it, but still don't work, the other versions works finely, but this one, idk why it don't work,

    mtndogs895Dec 27, 2023· 2 reactions
    CivitAI

    For SDXL Comfy UI users this may help if you have problems...

    C:\ ... \ComfyUI\models\checkpoints\yiffymix_v40.safetensors

    C:\ ... \ComfyUI_windows_portable\ComfyUI\models\configs\yiffymix_v40.yaml

    At first Yiffymix_40 gave a white screen result. It took me a few minutes to figure out what

    "Use "Load Checkpoint (With Config)" [Right Click\AddNode\advanced\model_merging]"

    was since it didn't match what was shown on my screen as I followed that line. All the choices under model_merging didn't work for me.

    Should read as:

    Use "Load Checkpoint (With Config)". Right Click > AddNode > advanced > loaders > Load Checkpoint With Config (DEPRECATED).

    Yiffymix_40 now works on my SDXLv1.0, ComfyUI: 1857 (2023-12-26) Manager: V1.5.2

    60987Dec 27, 2023· 2 reactions
    CivitAI

    an error has occurred! because the lora model cannot be edited only because of the version on v4.0. I've already done training/generator with my lora and suddenly it's a blurry lora image and that's a mistake. https://imgur.com/d35WnS7 https://imgur.com/kn1XUev https://imgur.com/O4tLWeS https://imgur.com/3ggwoal

    kurikoaiDec 27, 2023· 9 reactions
    CivitAI

    v40 on A1111 doesn't work. Definitely need some sort of instructions.

    MobbunDec 27, 2023· 1 reaction

    "This checkpoint includes a config file, download and place it along side the checkpoint."

    RetroSharkBiteDec 27, 2023

    @Mobbun Did this, made sure I had CFG rescale, and it comes out looking quite bad using the example prompts and settings. Will just have to stick with v36 until some kind of tutorial comes out.

    bakecrustoDec 27, 2023· 10 reactions
    CivitAI

    A1111 instructions for v40:
    - Make sure your webui is up to date;
    - Download the config file (link under the model download button), make sure it's named the same as the Model file but with .yaml extension instead of .safetensors, and throw it in the same folder as the model;
    - Throw a copy of the config file into the webui/configs folder, just to be sure;
    - Check if you have the CFG Rescale extension enabled, if not, enable it - if you don't have it in your Extension list, add it using url https://github.com/Seshelle/CFG_Rescale_webui
    - Restart your webui;
    - When generating, enable CFG Rescale and set it to 0.5 or 0.7, and use the VAE provided by the model creator;
    - Everything should be working fine now

    60987Dec 27, 2023

    But... My lora training doesn't work! I tried this for kohya trainer but my lora models image are blurry.

    RetroSharkBiteDec 27, 2023

    "Check if you have the CFG Rescale extension enabled, if not, enable it - if you don't have it in your Extension list, add it using url https://github.com/Seshelle/CFG_Rescale_webui"

    You say this, but how does one actually install it?

    RetroSharkBiteDec 27, 2023

    Figured out how to install with URL, but no idea what a VAE is. Images are coming out looking quite jank.

    RetroSharkBiteDec 27, 2023

    Cheese and crackers. Okay, so you weren't joking about needing that VAE. My google-fu is weak looking up install instructions for this stuff. Cheers.

    60987Dec 28, 2023

    @RetroSharkBite look buddy, I edit for kohya trainer and not stable diffusion webui.

    60987Dec 28, 2023

    I could try but how?

    60987Dec 28, 2023

    @RetroSharkBite @mobbun Haha! Now I know what that was! Now they are not blurry for my Lora pictures and everything runs normally. what that was, that was on "v_parameterization". I didn't know that I could set it up, but anyway, I've already set it up and it's running normally again! :)

    60987Dec 28, 2023

    I hope!

    60987Dec 28, 2023· 1 reaction

    YES!!! https://i.imgur.com/VtPpGX1.png That worked really well and of course I'm happy. :D ;3

    mtndogs895Dec 27, 2023· 4 reactions
    CivitAI

    I see Stable Diffusion 2.0 is out. November 24th 2023. Maybe SDXL 2.0 to hit the wires sometime next year ?

    https://stability.ai/news/stable-diffusion-v2-release

    60987Dec 27, 2023

    @mtndogs895 I'm sorry, what? That's supposed to be a joke, right.....

    MobbunDec 27, 2023

    I'd check the year. That's from 2022, over a year old. Funnily enough, you're not the first person I've seen to make that mistake.

    mtndogs895Dec 27, 2023

    My bad. Thanks for pointing that out.

    Kleo758Dec 27, 2023
    CivitAI

    i am using A1111 and have followed the installation instructions but my generations are coming out rather blurry and odd in color compared to v36.

    What am i doing wrong?
    here are the v36 and v40 examples of the same generation
    v36 https://puu.sh/JXB26/ca2fdb256d.png
    v40 https://puu.sh/JXB28/7a3fb120da.png

    RetroSharkBiteDec 27, 2023· 2 reactions

    Okay, figured it out:
    You need this ->https://civitai.com/models/118561/anything-kl-f8-anime2-vae-ft-mse-840000-ema-pruned-blessed-clearvae-fp16cleaned
    Put it in your VAE folder. What's that? It's in your models folder.
    In the Web UI go to setting > User interface.
    Under [Quicksettings list] click the drop down arrow and select 'sd_vae', scroll up and press apply, then reload. Now next to your model selection select fancy new VAE.

    Tada! No more deepfried images with v40

    Kleo758Dec 27, 2023

    @RetroSharkBite Worked like a charm! Thanks a lot!
    here is the image i got: https://puu.sh/JXBtU/b9e1d03895.png

    RetroSharkBiteDec 27, 2023
    CivitAI

    You really need the VAE with A1111 if you don't want deepfried images.


    Download it, put it in the stable-diffusion-webui\models\VAE folder.

    Go to the web ui settings, User interface, click the arrow under 'Quicksettings list', then find and click 'sd_vae'.

    Scroll up, click apply, then reload UI.

    There'll be a new drop down next to model in the web UI for your new VAE. Load it up and you should be good to go.

    tombotDec 28, 2023

    Same deal with what I'm using, they should add that to the usage instructions.

    Argon42Dec 28, 2023

    But the instructions say you're supposed to use the vae kl-f8-anime2
    edit: also just noticed that the vae isnt even in the downloads anymore in this version...

    tombotDec 28, 2023

    @rantas I can't say for certain, but I don't think that part of the instructions was there when I posted that or when I downloaded it initially. Either way it's there now, which is good.

    Northern_DragonDec 28, 2023
    CivitAI

    It took me a while to dial in exactly how to get good results out of v40 on a1111, and I'm mostly loving it, but I find I'm getting my images... Not very colorful. The results are good otherwise.

    dillon101Dec 28, 2023

    Download the VAE

    Argon42Dec 28, 2023
    CivitAI

    My gens look very different with v40 than with any earlier version. I am using both the config file and the kl-f8-anime2 vae. They are not distorted or anything. Actually, they don't look bad. It's just that they are much darker than before and the backgrounds look very different composition-wise (somehow simpler) than what I'm used to.

    Am I supposed to use CFG Rescale>0? Am I doing something wrong?

    It would be cool to get some feedback. Is everyone experiencing this?

    chilon249
    Author
    Dec 29, 2023· 3 reactions

    You are right, this v-pred model sometimes looks dull.

    That's why I recommend using the v-pred model with a "refiner".

    non-v-pred model: more creative but unstable

    v-pred model: much more stable and accurate, but less creative (for now)

    The final goal for v4 is to match the quality of the v3 versions.

    Argon42Dec 29, 2023

    @chilon249 Thank you for replying. So, I am not doing anything wrong and it's normal that the images are darker and stick more closely to the prompt without adding much detail freely, resulting in more "dull and empty" backgrounds?

    The backgrounds remind me of fluffyrock now... Are you sure CFG Rescale isn't required?

    chilon249
    Author
    Dec 29, 2023· 1 reaction

    V-pred models need stronger prompt guidance to build detail.

    Negative prompts like "sketch, monochrome" can fix some of the dullness.

    The original FluffyRock-vpred-snr is like a highly compressed model; it needs CFG Rescale to "decompress" it.

    After merging, the heavy compression is diluted, so only a low level of CFG Rescale is needed, or none at all.

    216396Dec 29, 2023
    CivitAI

    I hope the next Update won't give us deep fried images

    corinommerli890Dec 29, 2023· 2 reactions
    CivitAI

    why are the images so horribly bad, it looks so crunchy and deepfried

    TribalDragonDec 30, 2023· 11 reactions
    CivitAI

    Shits broken pal

    chilon249
    Author
    Dec 30, 2023· 5 reactions
    CivitAI

    Re-uploaded the v4.0 model with the WD-KL-F8-Anime2 VAE baked in; it may have a different hash value.

    Argon42Dec 30, 2023

    Does this mean you have to remove the vae from settings->vae now?

    chilon249
    Author
    Dec 30, 2023

    With the VAE setting on "none" or "automatic", the baked-in VAE is used.

    If you select another VAE, it will override the baked one.

    IdanisDec 31, 2023

    where is this "Baked" version? sorry I'm not use to a lot of this yet

    tombotJan 4, 2024

    @Idanis I think it replaced the original checkpoint download, if you download it again then that should be the one you want.

    goxabo1145Dec 30, 2023· 9 reactions
    CivitAI

    All pics are fully noisy and 0 details, the 3.0 version still better.

    mentholJan 3, 2024· 1 reaction

    Gotta use the config file.

    cAIkDragonDec 30, 2023· 1 reaction
    CivitAI

    nipples is not working as negative prompt in v4 but in earlier versions.

    Rider_56Dec 30, 2023· 3 reactions
    CivitAI

    yeah, v40 kind of generates high-contrast images, and even the negative prompt from the description doesn't help. I dropped the .yaml file next to the original checkpoint and it doesn't help

    kikafe7870707Dec 31, 2023
    CivitAI

    V4.0 is a step backwards. The merger with easyfluff improves anatomy in most cases, but also homogenizes many art styles, and makes it more difficult to remove that horrible greyscale shading that plagues other popular furry models (especially easyfluff). It also makes poses much too consistent, and it's difficult to get variety as every image comes out as one of like 20 common poses.

    nunyabusinessDec 31, 2023

    In other words - you were relying on model artifacting to get randomness and vpred has exposed the flaw in that practice?

    Not sure what you're talking about with style homogenization or greyscale output though. You're not using a terrible, giant negative are you?

    MobbunDec 31, 2023

    @nunyabusiness There is a certain brown tint that's apparent on EasyFluff and likely yiffy4.0. It's due to the fact that it's a vpred model merged with eps. I don't think pure vpred models have these problems. I'm not sure how 4.0 was merged, but it may have exacerbated the brownout.

    terencedrew169Jan 1, 2024· 1 reaction

    Ive come to the same conclusions. The washed-out palette of clothing was especially noticeable. v40 is not an improvement, nor a step in a different direction, but a step off a cliff.

    dex_dungeonJan 4, 2024

    The point of an AI generator is to work alongside a random seed to generate a variety of images. If you plug in 'running' as a pose and aren't getting a variety of running poses that's as broad as reasonably expected from the training data, that is an issue. Especially when the tagging of the training data isn't robust enough to deliberately summon up the edge-case poses.

    Argon42Jan 8, 2024· 1 reaction

    I'm also not sure what to think of v40.

    People said yiffymix doesn't follow the prompt accurately and is too random. But I don't actually mind. I like using a lot of wildcards for backgrounds, poses etc and have yiffymix be "creative" and flesh it all out with details that fit into the picture. It's amazing what kind of detailed and coherent environments you get just by prompting "jules verne" for example. Always fun to make a few hundreds of gens at a time and then see what the model came up with while browsing through the batch. And the results were always bright, saturated, detailed and varied, the faces and bodies tended to be beautiful, even without any artist tags.

    Now everything looks very similar and overly dark. Backgrounds are simple or missing entirely. Some tags that didn't seem to do anything at all before seem to be more effective now. That's true.

    I don't want to pass a final judgement here, but at this point I don't really know why I shouldn't use fluffyrock instead. They seem more or less the same now.

    KorporateRaiderJan 1, 2024
    CivitAI

    I've been using v36 and can't for the life of me get the lighting sorted out. Everything is always aggressively backlit (or lit from top left corner) with deep shadows on the front of the body to the point where it's hard to discern any details. I've tried using prompts from brighter images without much luck. Anyone have any solid advice on how to brighten up my subjects?

    KorporateRaiderJan 1, 2024

    Also, I've gone through the various tags and think I've found that the species tag I'm using (panthera) looks excessively dark from the front, and is primarily rim-lit and top-lit, which washes out pretty much any image since I can't seem to get a source of light from the front for illumination

    loppirrJan 6, 2024

    Have you tried the vae in the description? That mostly fixed it for me

    KorporateRaiderJan 8, 2024

    @loppirr yes, it did help, and I found using CFG Remix made a major improvement as well!

    DiffussyJan 3, 2024· 1 reaction
    CivitAI

    Anyway you could put the LoRA collection on anything but MEGA? Constant daily limit errors.

    fightingblaze77Jan 4, 2024· 3 reactions
    CivitAI

    Horribly broken, everything looks like static and filmy with a vague idea of what my prompt is.

    kmttJan 4, 2024· 7 reactions

    Have you followed the (required) instructions for running this model? Especially needed for the latest v40, where you need extra stuff since it's a v-pred model.

    A lot of people, including myself, are getting amazing results, so maybe check your setup.

    MobbunJan 6, 2024· 5 reactions
    CivitAI

    For ComfyUI people, no need to bother with config file, there's a node you can use to set it to v-prediction.

    advanced > model sampling discrete node

    STEM_GambleJan 7, 2024

    Can confirm, added that between Load Checkpoint and sampler, Just Works (tm)

    yLoraJan 7, 2024
    CivitAI

    Hi, is there a beta tester program for this model? I'd love to be part of tests for this or to have access to additional versions of the model if there are such. Thanks

    solvenn7796Jan 7, 2024· 13 reactions
    CivitAI

    Does not work at all, generates only black pictures with small color noise

    widic47081Jan 7, 2024

    same

    kmttJan 7, 2024· 4 reactions

    @widic47081 @solvenn7796 Have you both read and properly followed the instructions and recommendations of this version before claiming it "doesn't work"?

    v40 is a v-pred model, you need the config file otherwise you'll get noise. CFG Rescale is also pretty useful.

    derpington09495Jan 11, 2024· 2 reactions

    I'm also having issues. I placed the 'config file' next to the checkpoint? I think?

    3296396Jan 9, 2024· 1 reaction
    CivitAI

    RIP

    TalbocJan 9, 2024· 4 reactions
    CivitAI

    Hate to say it but v40 is just not a good model. With earlier versions when you put in a prompt, you could get a wide variety of images out of it. The AI had some leeway to be creative. Now it all comes out looking like the same image, with minor tweaks, as if you were just re-running the same seed over and over. Whatever change you made for this version needs a rethink. And I guess I'm sticking with v34 a little bit longer. Sorry to have to be that critical, but this one's just not working out.

    anon45933Jan 9, 2024· 10 reactions
    CivitAI

    Wow, v40 is such a step up from the last. Great work, this is probably the best furry model out there right now!

    InvictusAIJan 10, 2024· 1 reaction
    CivitAI

    Great model, insanely flexible in skilled hands and very predictable at the same time. Follows prompts to the letter. Any chance you'll make a LCM model in the future? It would be great to use this for animations.

    IdanisJan 10, 2024· 3 reactions
    CivitAI

    I just wanted to add to the voices who are realizing the potential of v40, the hands are just incredible now and obeys prompts much easier, yes it is dark but thats fine, i used a brightness lora and I'm happy with the results, I am looking forward to some future releases however that will get the colors closer to v36, keep up the incredible work!

    paulosergiocardozoJan 11, 2024
    CivitAI

    Is it possible to launch a version of your model without needing external files? I use your model on sites like " Seaart.ai " and " Tensor.art " they don't support any type of files that have to be added externally. Thank you for your attention.

    chilon249
    Author
    Jan 12, 2024· 2 reactions

    All SD models need a "yaml" config; you can find them in SD-WebUI/configs.

    The default config is v1-inference.yaml (SD1.x models).

    But a "yaml" file can't be baked into the model like a VAE.

    Most online generators only support SD1.x or SDXL (non-v-pred).

    menatomboJan 12, 2024· 2 reactions
    CivitAI

    This is by far a great model, but like all models that try to merge loras in to the model they suffer from "same" syndrome. This means that if you put 1boy and 1girl, and loona or master tigress and tailung, you get two of one fur instead of two separate furs. Like this prompt "1boy, 1girl, Tai Lung, master tigress" It will either give you two snow leopards that look like tai lung or two tigers that look like tigress. Putting in () around them does absolutely nothing. Now, take this prompt for example "1boy, 1girl, ((fox)), ((wolf))" It will give you two different furs. That's why it is super sad when people merge LoRa's in to their models. (It plagues LLM models and it really kills stable diffusion models.)

    C1ydeJan 17, 2024

    This is an issue for many, many models. Try using the regional prompter extension if you're making an image with two distinct characters

    sansmiaJan 17, 2024

    @C1yde To add to that suggestion, also try defining your characters in separate areas of the prompt. I find if I define multiple characters at different places, it more easily separates them instead of merging the concepts together.

    118121Feb 3, 2024

    @sansmia Could you give some prompting examples?

    menatomboFeb 11, 2024

    @C1yde @sansmia This wouldn't be an issue, but if you want the characters to hug or kiss. This becomes an issue. I've tried telling it Right and Left. Front and Back. Even tried controlnet to get more control over the models. The only thing that seems to work is controlnet and then make your character in the area using masking. Then make the other by masking out another area. It would be awesome if it could just differentiate between the fact that there are two different characters. But once it sees one Lora - addled (addled like a sickness) prompt.. That the end of it. It stops listening and now I have 10 Nick Wildes or 10 Judy Hopps. This is the only reason I can find that Loras just aren't great. They allow someone to use 1/3 of the computing power to make, and less pictures, but then you get a partial model that is good at one thing and doesn't like sharing. :D I'll test out both of your suggestions, because it seems interesting. I would love to actually be able to tame some of these crazy Lora addled checkpoints.

    YoshbuttLovahJan 13, 2024· 1 reaction
    CivitAI

    Hey, just wanted to post an honest comment. Love this model to death. It's so flexible, and now even more so. Thank you for all the training and work you've done, but all I ask is that you find a way to clean up the Yoshis just a bit more! It takes a while to get a Yoshi without weird ear spikes and such. Other than that, perfect model. Please keep improving!!

    codysydney379Jan 14, 2024
    CivitAI

    An excellent model to have as an option when trying to create something specific. It doesn't seem compatible with TensorRT though; possibly it's the v-pred?

    yosh599Jan 24, 2024

    It seems to work fine using TensorRT for me in version v40.

    codysydney379Feb 13, 2024

    @yosh599 Yep, me too. TensorRT is just suuuuper picky

    racetik305623Jan 17, 2024
    CivitAI

    Trying to rewrite an existing prompt, but failing again and again at one part: nudity and NSFW.
    Does anybody know how to fix it?
    I'm trying to fix it by adding a negative prompt and cleaning out all the questionable prompt words. Still the same: if the art whose prompt I took has nudity, it magically appears in mine.

    SulfurFurJan 18, 2024

    It might help if you provide the prompt, or even better the complete metadata (seed etc.). Without it, it's hard to give helpful advice besides the usual "put nude, nsfw, etc. in the negative prompt"...

    dex_dungeonJan 19, 2024· 6 reactions
    CivitAI

    It seems to be specific artists that are cursed with the dark tinge. For example, Personalami gives the whole picture a dark purple tone.

    furryartfan2002261Jan 20, 2024· 5 reactions
    CivitAI

    Amazing model for sure! I did not have much success trying the model in Draw Things (on a Mac Mini M2 Pro). However, when I moved over to trying it in A1111, the turnaround for even one image (512x512) is currently about 25 minutes!!! Is that for real? I have not used SD/A1111 that much, but this seems almost unusable. The machine has 32 GB of RAM, so memory should not be the issue. Any suggestions?

    PapaPapaJan 24, 2024

    What's your computer's GPU? 25 minutes sounds like it's running on CPU.

    sansmiaJan 24, 2024· 1 reaction

    @PapaPapa He has a Mac Mini, so he has no GPU outside the integrated one Apple built into their M2 processor. Macs are a pretty bad time for this sort of thing, unfortunately. The GPU is the powerhouse of the whole thing, and without a discrete one, crunching on a CPU takes forever.

    sleembokerJan 22, 2024
    CivitAI

    (Yiffmix 40)

    There is yellowing in the images with practically any prompt. How can I get rid of it?

    PapaPapaJan 24, 2024

    It might be a VAE issue, or maybe you need to add the CFG rescale extension.

    216396Jan 24, 2024

    @PapaPapa How do I add the CFG Rescale Extension?

    PapaPapaJan 25, 2024

    @Steeltron2000 It should be under the extensions on A1111. https://github.com/Seshelle/CFG_Rescale_webui

    That one

    PapaPapaJan 25, 2024· 2 reactions

    @Steeltron2000 The extension can be installed from the Extensions tab by copying the repository link into the Install from URL section. A CFG Rescale value of 0.7 is recommended by the creator of the extension themself. The CFG Rescale slider will be below your generation parameters and above the scripts section when installed.
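    [Editor's note: for readers curious what the extension actually does, CFG Rescale blends plain classifier-free guidance with a version rescaled so its standard deviation matches the conditional prediction's, which counteracts washed-out or tinted output. A minimal NumPy sketch of the idea; the function name and shapes are illustrative, not the extension's actual code:]

    ```python
    import numpy as np

    def cfg_rescale(cond, uncond, guidance_scale=7.0, phi=0.7):
        """Classifier-free guidance with rescaling.

        phi=0 is plain CFG; phi=1 fully rescales the guided output so its
        std matches the conditional prediction's std. phi=0.7 is the value
        the extension's author recommends.
        """
        # Plain classifier-free guidance
        cfg = uncond + guidance_scale * (cond - uncond)
        # Rescale the guided prediction to the conditional branch's std
        rescaled = cfg * (cond.std() / cfg.std())
        # Blend between the rescaled and the plain CFG prediction
        return phi * rescaled + (1.0 - phi) * cfg
    ```
    
    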

    216396Jan 25, 2024

    @PapaPapa OK, how do I download it?

    mblwuJan 26, 2024· 2 reactions

    It can also be a negative embedding issue: badyiffymix applies a red tone almost everywhere, and boring_e621 applies a yellow-green cast to many styles.

    216396Jan 26, 2024

    @silkytail113 Ok

    MobbunJan 29, 2024

    It's an issue with merging v-pred and eps-pred models: you get brownout and the yellow tint. It's also apparent in EasyFluff, though it's less severe there.

    Checkpoint
    SD 1.5

    Details

    Downloads
    8,214
    Platform
    CivitAI
    Platform Status
    Available
    Created
    12/25/2023
    Updated
    5/14/2026
    Deleted
    -

    Files

    yiffymix_v40.safetensors

    Mirrors

    HuggingFace (1 mirror)
    CivitAI (1 mirror)

    yiffymix_v40.ckpt

    Mirrors

    CivitAI (1 mirror)
