Snakebite 2.0

    A completely new approach!

    The original Snakebite was an Illustrious model injected with bigASP's compositional blocks. Snakebite 2.0, however, is primarily bigASP - but enhanced with a number of techniques to dramatically improve its textures and aesthetic capabilities.

    ❤️ If you like Snakebite, you can help offset the cost of training:

    Buy liftweights a Coffee


    ⚠️ IMPORTANT:

    This model uses Flow Matching, so you must connect it to the ModelSamplingSD3 node in ComfyUI to get correct results.
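
    If you build workflows programmatically in ComfyUI's API format, the node slots in between the checkpoint loader and the sampler. Here's a minimal Python sketch of that fragment; the node IDs are placeholders of mine, and only the class name ModelSamplingSD3 and its shift input are ComfyUI's:

    workflow = {}  # an API-format prompt dict, abbreviated to the relevant node
    workflow["10"] = {
        "class_type": "ModelSamplingSD3",
        "inputs": {
            "shift": 3.0,       # same value as the recommended model shift below
            "model": ["4", 0],  # output 0 of the CheckpointLoaderSimple node "4"
        },
    }
    # The KSampler's model input should then reference node "10", not "4".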

    If you're using a different UI that does not support the Flow Matching objective, you can try Snakebite v1.4 instead - that one behaves like a regular Illustrious model.


    Why change the formula?

    While I'm happy with the original Snakebite, there are some "gaps" between the two architectures I haven't been able to close with merging. Over the course of 1.0 through 1.4, I did what I could to minimize weird background objects and extra limbs, but it occurred to me that the perfect solution is already right here, in the form of vanilla bigASP 2.5.

    I don't know if many people realize how good bigASP is... the prompt adherence is almost Flux-level with none of the censorship, plastic skin, steep hardware requirements, or bad licensing. It's pretty remarkable.

    I set out to solve two of its main problems:

    1. bigASP's textures are straight-up scuffed. I don't know if there was an issue with its aesthetic captioning or if it's simply "seen too much" (it was trained on 13 million images!), but no amount of (((high quality, masterpiece, so good))) is going to produce an image that looks even half as good as that of your average SDXL model.

    2. You need to prompt it for everything. This is not necessarily a bad thing. Problem is, bigASP has some very weird ideas about the stuff you fail to mention. For example, if you ask for 1girl, standing it might give you a picture of 1girl, standing, morbidly obese, upside down.

    Both of these problems have been addressed, at least to an extent. It wasn't easy! bigASP's input blocks are really delicate - if you try massaging them with aesthetic LoRAs, the model tends to fall apart completely. Compatibility with SDXL LoRAs is poor, because they were not trained with the Flow Matching technique and bigASP's CLIP is very different.

    Still, I found some blocks that responded well to my cosmetic upgrades. So I have been slowly and carefully introducing these blocks to things like Direct Preference Optimization, with the goal of helping bigASP know what to do when you don't provide a 500-word prompt (i.e. make every picture look decent and not insane).


    👍 Advantages over v1

    1. Prompt adherence is UH-MAZING for an SDXL model - check the demo gallery

    2. Understands more complex concepts and interactions

    3. Mangled limbs are almost nonexistent thanks to Flow Matching

    4. Very flexible with styles; more photorealistic than v1 while also more capable of generating illustrations

    5. It can spell words pretty well, provided you don't mind re-rolling a few times

    👎 Disadvantages

    1. Aesthetically, it's not as consistent as IL - but it's getting close in newer versions (v2.3)

    2. The lack of IL means booru tag knowledge is worse, but you might be surprised at how much bigASP knows... it can generate tons of mainstream characters and concepts just fine on its own


    Turbo:

    • 6-9 steps

    • LCM sampler

    • Beta, normal, or simple scheduler

    • CFG 1

    • Model shift of 3 (this is the value that bigASP was trained on) or 6 (allegedly even better according to bigASP's author)

    • Sample workflow: https://pastebin.com/Z35kNns6

    Full:

    • 25-40 steps

    • Euler ancestral sampler for speed, dpmpp_2s_ancestral for quality

    • Simple scheduler

    • CFG 4-6

    • Model shift of 3

    • Negative prompt strongly recommended (e.g. worst quality)

    • Sample workflow: https://pastebin.com/ynrJ1Nt2

    Note: increasing the model shift may improve prompt adherence at the cost of quality. This is particularly useful with character LoRAs. Try a value between 6 and 8.
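
    For reference, the shift applied by ModelSamplingSD3 is (to my understanding) the standard SD3-style time shift, which remaps the sampling schedule toward the high-noise end. A minimal sketch of the remapping, with a function name of my own:

    def shift_sigma(sigma: float, shift: float = 3.0) -> float:
        """SD3-style time shift used by ModelSamplingSD3 (sigma in [0, 1]).

        shift=1 leaves the schedule unchanged; larger values spend more of
        the step budget at high noise, where composition and anatomy are
        decided, and less on fine texture - hence the adherence-vs-quality
        trade-off mentioned above.
        """
        return shift * sigma / (1.0 + (shift - 1.0) * sigma)

    For example, shift_sigma(0.5, 3.0) returns 0.75: the midpoint of the schedule gets pushed three quarters of the way toward pure noise.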


    📖 Prompting Guide

    The #1 thing is, be careful with your fluff. If you ask for warm lighting, you better believe you're gonna get warm lighting. Like, a lot of it. Even adding a simple high quality to your prompt might change your image completely. So be deliberate. Start with zero fluff.

    The effect is not always intuitive. For example--as the author of bigASP has pointed out--the term masterpiece quality "causes the model to tend toward producing illustrations/drawings instead of photo."

    If it's photos you want, I've yet to find phrases that work better than onlyfans, abbywinters photo. Hey now, I'm being serious! These terms work great for innocent stuff, too. (EDIT: As of v2.2, these helper phrases are optional. Using photograph of a... is usually enough in newer versions of Snakebite.)

    Also, bigASP's training data was captioned with JoyCaption (online demo here, made by the same author as bigASP), so you should try speaking to the model in a similar cadence and tone as JoyCaption does. Booru tags work okay too, but they tend to push images in more of a CGI direction.
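
    To illustrate the cadence, instead of a bare tag list, a JoyCaption-style prompt reads like a full description (this example is mine, not from the training data):

    A photograph of a young woman with shoulder-length brown hair standing in a sunlit kitchen. She is wearing a loose white t-shirt and smiling at the camera. Warm morning light comes through a window on the left, casting soft shadows across the counter.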

    Most of the time, if Snakebite is not giving you the image you want, it's a matter of finding another phrasing or adding (((emphasis))).


    🏋️‍♂️ Training LoRAs

    Option A


    There is an official LoRA training script for bigASP 2.5 available here:

    It's easy to install. I'm running it through my kohya-ss venv, as it only required a couple extra (non-conflicting) dependencies. However, it has a limited feature set and has not been thoroughly battle tested.

    The train-lora.py script does not target as many modules as kohya's sd-scripts. This results in much smaller LoRA filesizes, but may prove insufficient for e.g. capturing a character's likeness, even at high rank and alpha. To fix this, search for "target_modules" in the script and update accordingly:

    target_modules=["to_k", "to_q", "to_v", "to_out.0", "k_proj", "v_proj", "q_proj", "out_proj", "proj_in", "proj_out", "conv_in", "conv_out", "ff.net.0.proj", "ff.net.2"]

    That should produce a file equal in size to kohya's (at fp16 precision).

    The default settings are good. You can increase lora_rank and lora_alpha if you want, but the default value of 32 is usually fine. The script buckets images automatically. Be aware that it only saves a checkpoint at the end of training.

    Don't train on turbo versions of Snakebite. Either use the full version (once I've uploaded it), or train on bigASP 2.5 vanilla.


    Option B

    There is an unofficial fork of sd-scripts that supports Flow Matching, created by @deGENERATIVE_SQUAD:

    This option takes more effort to set up, but it opens up a lot more possibilities for customization. You may need to adjust the code for compatibility with your environment. In my case, I had to remove the --loss_type="fft" parameter and swap out references to transforms.v2 in library/train_util.py with the original code from the sd3 branch.

    Pass the following arguments to sdxl_train_network.py:

    --flow_matching ^
    --flow_matching_objective="vector_field" ^
    --flow_matching_shift=3.0 ^
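
    For context, a complete invocation might look something like the following. Only the last three flags come from the fork; everything else is a standard sd-scripts argument, and all paths and hyperparameters here are placeholders to adapt:

    accelerate launch sdxl_train_network.py ^
      --pretrained_model_name_or_path="bigasp_v25.safetensors" ^
      --train_data_dir="D:\datasets\my_concept" ^
      --resolution="1024,1024" ^
      --output_dir="D:\loras" ^
      --network_module=networks.lora ^
      --network_dim=32 --network_alpha=32 ^
      --learning_rate=1e-4 ^
      --max_train_steps=2000 ^
      --mixed_precision="bf16" ^
      --save_model_as=safetensors ^
      --flow_matching ^
      --flow_matching_objective="vector_field" ^
      --flow_matching_shift=3.0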


    Thank you. As always, I look forward to your feedback. Please share the model and upload some images to help it gain traction. It would be amazing if we could make Snakebite eligible for Civitai's onsite generator someday!

    Description (v2.2)

    2.2 was finetuned on around 700 high-quality photographs to bring the model's baseline style much closer to that of real life. It also has better anatomy and more stable backgrounds.

    Training details:

    • Used Flow Matching objective for maximum quality.

    • Photos were selected for unique backdrops, little to no grain/artifacts, strong composition, and subject diversity.

    • Plenty of NSFW material.

    • All watermarks removed.

    • Other styles--such as "lofi" or "cinematic"--were excluded from the training set, in order to establish the baseline style.

    • Trained for 20k steps, then pruned a few overcooked blocks.

    • Captions generated with JoyCaption as it's what bigASP knows best.

    Takeaways:

    • Terms like onlyfans or photorealistic are optional in 2.2. Using photo of a... is usually enough.

    • I'm using a quantized version of JoyCaption Beta One that runs on my local hardware, and it's not perfect. It often goes into detail about watermarks that don't exist. I was worried this would harm the model's ability to spell words, but that didn't seem to happen. If anything, Snakebite 2.2 might be a little better at text than ever before. No idea why.

    • Despite the carefully cultivated dataset, some image artifacts are present in 2.2. Based on my experience, increasing the dataset to 1-2k images could alleviate this. Improving the captions would also help.

    • 2.2 likes adding some soft focus/depth of field by default. It's a bit difficult to steer away from. Adding more "sharp focus" photography to the training set would help.

    • For me, Snakebite 2.2 represents peak photorealism in SDXL, especially when you factor in the prompt adherence. But if you like making other types of images, you may find that 2.1 is more appropriate for the task.

    Have fun!


    Comments (18)

    Olbanets · Oct 29, 2025 · 1 reaction

    Noboo · Oct 29, 2025

    Maybe it's just me doing something wrong, but when using 2.1 (non-turbo) with the suggested values (or any others, for that matter), I just get random splotches of color.

    liftweights (Author) · Oct 29, 2025

    Are you using ModelSamplingSD3?

    Noboo · Oct 31, 2025

    Never mind, I'm retarded - yeah, I'm using ModelSamplingSD3, but I've used the wrong scheduler the whole time. It's working now.

    liftweights (Author) · Oct 31, 2025

    @Noboo It's okay, we're all a little retarded sometimes 🙃 Enjoy the model!

    Kitten123 · Oct 30, 2025

    Can you add a non-turbo for the 2.0 version?

    Kitten123 · Oct 30, 2025

    I mean the 2.2 version.

    liftweights (Author) · Oct 30, 2025 · 1 reaction

    Yes, I'm preparing for the release of 2.2 Full. It should be available in a day or two 🙂

    ElectricDreams · Oct 30, 2025 · 2 reactions

    I hope you succeed in your vision and we can push forward the XL architecture. Keep up the great work.

    liftweights (Author) · Oct 30, 2025 · 1 reaction

    Thank you! Long live SDXL!

    XpomulCivi · Oct 30, 2025

    Which ControlNet (e.g. Scribble or Sketch) is recommended for this, and do they even work?

    liftweights (Author) · Oct 30, 2025 · 1 reaction

    Hi, I'm a fan of xinsir's ProMax Union model:

    https://huggingface.co/xinsir/controlnet-union-sdxl-1.0

    I haven't tested it much with Snakebite yet, but I suspect compatibility is somewhere between "decent" and "very good." You can let me know if you try it.

    XpomulCivi · Oct 30, 2025

    @liftweights Ty, I'll try it out - but I'm really new to ComfyUI and I use it with Krita AI Diffusion, so it might just be me not knowing how to properly set everything up. So far the results with other ControlNets were not good, to say the least.

    XpomulCivi · Oct 30, 2025 · 1 reaction

    Update: can confirm it works nicely.

    XpomulCivi · Oct 30, 2025

    The next thing I'm trying to do is use this for img2img refinement in Krita, which I can't manage yet. Are there some good img2img workflows I can use as a base to add the Krita nodes?

    Selections and using input are a real hassle :|

    liftweights (Author) · Oct 31, 2025 · 1 reaction

    Sorry, I'm unfamiliar with Krita. I'm working on an all-in-one ComfyUI workflow that includes img2img upscaling. It's not ready yet, but here are some key nodes & settings you may want to look into:

    - Upscale the original image 2x with a traditional algorithm like lanczos and apply a little sharpening (see the sketch after this list)
    - 50% denoise with kl_optimal scheduler
    - You can use about half as many steps as you used to generate the original image, e.g. an 8-step turbo image can be upscaled in about 4 steps
    - Use the original prompt for img2img upscaling, perhaps adding a few negatively weighted quality terms like (film grain, blurry, compression artifacts, JPEG artifacts, worst quality:-1), (too many fingers:-1.5)

    - Use Tiled Diffusion on "Mixture of Diffusers" mode to prevent anatomy problems

    - Use Detail Daemon Sampler to squeeze out as much detail as possible
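
    For the first step, a minimal Pillow sketch of the 2x lanczos upscale with light sharpening (filenames and the sharpening amount are placeholders of mine), done before the image goes back to the sampler for the 50% denoise:

    from PIL import Image, ImageEnhance

    img = Image.open("original.png")
    # 2x lanczos upscale: a clean, artifact-free base for the img2img pass
    img = img.resize((img.width * 2, img.height * 2), Image.LANCZOS)
    # light sharpening gives the denoise pass edges to latch onto
    img = ImageEnhance.Sharpness(img).enhance(1.3)
    img.save("upscaled.png")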

    Hope that helps.

    XpomulCivi · Nov 1, 2025 · 1 reaction

    @liftweights I've managed to cobble together an inpainting workflow, and it works quite well. I'd say your model is absolutely the best SDXL model I've used for its sheer versatility, quality, and prompt adherence. Nice work!

    liftweights (Author) · Nov 1, 2025

    @XpomulCivi Thank you so much!

    Details

    Type: Checkpoint
    Base model: SDXL 1.0
    Downloads: 557
    Platform: CivitAI
    Platform status: Available
    Created: 10/29/2025
    Updated: 4/30/2026
    Deleted: -

    Files

    snakebite2_v22Turbo.safetensors
