CivArchive

    A completely new approach!

    The original Snakebite was an Illustrious model injected with bigASP's compositional blocks. Snakebite 2.0, however, is primarily bigASP - but enhanced with a number of techniques to dramatically improve its textures and aesthetic capabilities.

    ❤️ If you like Snakebite, you can help offset the cost of training:

    Buy liftweights a Coffee


    ⚠️ IMPORTANT:

    This model uses Flow Matching, so you must connect it to the ModelSamplingSD3 node in ComfyUI to get correct results.

    If you're using a different UI that does not support the Flow Matching objective, you can try Snakebite v1.4 instead - that one behaves like a regular Illustrious model.
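If you're wondering what the model shift setting actually does, here is a minimal sketch of the SD3-style timestep shift that nodes like ModelSamplingSD3 apply. The function name and exact implementation are mine for illustration, not ComfyUI's code; the formula is the commonly cited SD3 shift:

```python
def shift_timestep(t: float, shift: float = 3.0) -> float:
    """SD3-style timestep shift: remaps t in [0, 1] so more sampling
    steps land in the high-noise region. shift=1 is the identity."""
    return shift * t / (1.0 + (shift - 1.0) * t)

# With a higher shift, the midpoint of the schedule moves toward high noise:
print(shift_timestep(0.5, shift=1.0))  # 0.5 (unchanged)
print(shift_timestep(0.5, shift=3.0))  # 0.75
print(shift_timestep(0.5, shift=6.0))  # ~0.857
```

This is why the shift value matters so much: it reshapes the whole noise schedule, and a model trained with shift 3 expects sampling to follow the same curve.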


    Why change the formula?

    While I'm happy with the original Snakebite, there are some "gaps" between the two architectures I haven't been able to close with merging. Over the course of 1.0 through 1.4, I did what I could to minimize weird background objects and extra limbs, but it occurred to me that the perfect solution is already right here, in the form of vanilla bigASP 2.5.

    I don't know if many people realize how good bigASP is... the prompt adherence is almost Flux-level with none of the censorship, plastic skin, steep hardware requirements, or bad licensing. It's pretty remarkable.

    I set out to solve two of its main problems:

    1. bigASP's textures are straight-up scuffed. I don't know if there was an issue with its aesthetic captioning or if it's simply "seen too much" (it was trained on 13 million images!), but no amount of (((high quality, masterpiece, so good))) is going to produce an image that looks even half as good as that of your average SDXL model.

    2. You need to prompt it for everything. This is not necessarily a bad thing. Problem is, bigASP has some very weird ideas about the stuff you fail to mention. For example, if you ask for 1girl, standing it might give you a picture of 1girl, standing, morbidly obese, upside down.

    Both of these problems have been addressed, at least to an extent. It wasn't easy! bigASP's input blocks are really delicate - if you try massaging them with aesthetic LoRAs, the model tends to fall apart completely. Compatibility with SDXL LoRAs is poor, because they were not trained with the Flow Matching technique and bigASP's CLIP is very different.

    Still, I found some blocks that responded well to my cosmetic upgrades. So I have been slowly and carefully introducing these blocks to things like Direct Preference Optimization, with the goal of helping bigASP know what to do when you don't provide a 500-word prompt (i.e. make every picture look decent and not insane).


    👍 Advantages over v1

    1. Prompt adherence is UH-MAZING for an SDXL model - check the demo gallery

    2. Understands more complex concepts and interactions

    3. Mangled limbs are almost nonexistent thanks to Flow Matching

    4. Very flexible with styles; more photorealistic than v1 while also more capable of generating illustrations

    5. It can spell words pretty well, provided you don't mind re-rolling a few times

    👎 Disadvantages

    1. Aesthetically, it's not as consistent as IL - but it's getting close in newer versions (v2.3)

    2. The lack of IL means booru tag knowledge is worse, but you might be surprised at how much bigASP knows... it can generate tons of mainstream characters and concepts just fine on its own


    Turbo:

    • 6-9 steps

    • LCM sampler

    • Beta, normal, or simple scheduler

    • CFG 1

    • Model shift of 3 (this is the value that bigASP was trained on) or 6 (allegedly even better according to bigASP's author)

    • Sample workflow: https://pastebin.com/Z35kNns6

    Full:

    • 25-40 steps

    • Euler ancestral sampler for speed, dpmpp_2s_ancestral for quality

    • Simple scheduler

    • CFG 4-6

    • Model shift of 3

    • Negative prompt strongly recommended (e.g. worst quality)

    • Sample workflow: https://pastebin.com/ynrJ1Nt2

    Note: increasing the model shift may improve prompt adherence at the cost of quality. This is particularly useful with character LoRAs. Try a value between 6 and 8.


    📖 Prompting Guide

    The #1 thing is, be careful with your fluff. If you ask for warm lighting, you better believe you're gonna get warm lighting. Like, a lot of it. Even adding a simple high quality to your prompt might change your image completely. So be deliberate. Start with zero fluff.

    The effect is not always intuitive. For example--as the author of bigASP has pointed out--the term masterpiece quality "causes the model to tend toward producing illustrations/drawings instead of photo."

    If it's photos you want, I've yet to find phrases that work better than onlyfans, abbywinters photo. Hey now, I'm being serious! These terms work great for innocent stuff, too. (EDIT: As of v2.2, these helper phrases are optional. Using photograph of a... is usually enough in newer versions of Snakebite.)

    Also, bigASP's training data was captioned with JoyCaption (online demo here, made by the same author as bigASP), so you should try speaking to the model in a similar cadence and tone to JoyCaption's. Booru tags work okay too, but they tend to push images in more of a CGI direction.

    Most of the time, if Snakebite is not giving you the image you want, it's a matter of finding another phrasing or adding (((emphasis))).


    🏋️‍♂️ Training LoRAs

    Option A


    There is an official LoRA training script for bigASP 2.5 available here:

    It's easy to install. I'm running it through my kohya-ss venv, as it only required a couple extra (non-conflicting) dependencies. However, it has a limited feature set and has not been thoroughly battle tested.

    The train-lora.py script does not target as many modules as kohya's sd-scripts. This results in much smaller LoRA filesizes, but may prove insufficient for e.g. capturing a character's likeness, even at high rank and alpha. To fix this, search for "target_modules" in the script and update accordingly:

    target_modules=["to_k", "to_q", "to_v", "to_out.0", "k_proj", "v_proj", "q_proj", "out_proj", "proj_in", "proj_out", "conv_in", "conv_out", "ff.net.0.proj", "ff.net.2"]

    That should produce a file equal in size to kohya's output (at fp16 precision).
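For a rough sense of why targeting more modules grows the file: each targeted linear layer gets a rank-r down/up matrix pair. A back-of-the-envelope sketch (the layer dimensions below are illustrative, not bigASP's actual sizes):

```python
def lora_params(in_dim: int, out_dim: int, rank: int = 32) -> int:
    """Parameter count for one LoRA pair on a linear layer:
    down matrix (in_dim x rank) plus up matrix (rank x out_dim)."""
    return rank * in_dim + rank * out_dim

# Illustrative only -- one hypothetical 1280-dim attention projection at rank 32:
p = lora_params(1280, 1280, rank=32)
print(p)                    # 81920 parameters
print(p * 2 / 1024, "KiB")  # 160.0 KiB at fp16 (2 bytes per parameter)
```

Multiply that by the number of targeted modules and you can see why the expanded target_modules list above brings the file back up to kohya-sized checkpoints.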

    The default settings are good. You can increase lora_rank and lora_alpha if you want, but the default value of 32 is usually fine. The script buckets images automatically. Be aware that it only saves a checkpoint at the end of training.

    Don't train on turbo versions of Snakebite. Either use the full version (once I've uploaded it), or train on bigASP 2.5 vanilla.


    Option B

    There is an unofficial fork of sd-scripts that supports Flow Matching, created by @deGENERATIVE_SQUAD:

    This option takes more effort to set up, but it opens up a lot more possibilities for customization. You may need to adjust the code for compatibility with your environment. In my case, I had to remove the --loss_type="fft" parameter and swap out references to transforms.v2 in library/train_util.py with the original code from the sd3 branch.

    Pass the following arguments to sdxl_train_network.py:

    --flow_matching ^
    --flow_matching_objective="vector_field" ^
    --flow_matching_shift=3.0


    Thank you. As always, I look forward to your feedback. Please share the model and upload some images to help it gain traction. It would be amazing if we could make Snakebite eligible for Civitai's onsite generator someday!

    Description

    Numerous small improvements:

    • Adjusted color grading for less red bias.

    • Reworked contrast and exposure; less cinematic/boosted, more lifelike.

    • Slightly more stable backgrounds.

    • Images are slightly sharper.

    • Replaced included VAE with Felldude/SDXL_NaturalSkin_VAE for better skin tones. Note: this is great at decoding images, but bad at encoding images. You might need to load a different VAE for e.g. img2img.

    As a result, 2.4 creates images that are a bit closer to "neutral tone mapping" and should steer more easily with stylistic directives. If you wish to retain the boosted look of 2.3, you can add stuff like vibrant, deep shadows or try a saturation LoRA.


    Comments (8)

    liftweights
    Author
    Dec 18, 2025 · 2 reactions

    My new model is now available! 🎅
    Snakelite v1.0

    ClickbaitSamurai
    Dec 21, 2025 · 1 reaction

    This is, without a doubt, the BEST realistic model I have ever used locally. Insane! I hope your work continues :)

    liftweights
    Author
    Dec 21, 2025

    Thank you! Yep, more stuff coming soon!

    insist_offing
    Jan 16, 2026 · 1 reaction

    2.3 and 2.4 are massive downgrades from 2.2. I can do SO MANY different styles and subject matters with 2.2 which are completely impossible with the newer versions. The loss of knowledge just isn't worth the improvements in consistency. I feel like Snakebite has completely lost the one thing that made it stand out from other models.

    XpomulCivi
    Jan 18, 2026

    A few things that helped me a lot with this: Vpred fix in the negative, at around -0.5 to -0.8, seems to keep realism even with less-trained concepts.

    Euler a sampler for better backgrounds and colors (yes it works, but it tends to do anatomy worse).

    Lying Sigma sampler from Daemon Detailer, at positive strengths (less detail), for initial generation. This seems to help reduce background artifacting.

    You can also use the Touch of Realism lora for consistency.

    XpomulCivi
    Feb 23, 2026

    Do you know if this model works with Fooocus Inpaint? The same workflow used in AI Diffusion + SD3 Sampling returns only noise. Other Turbo models like IntoRealism work fine.

    liftweights
    Author
    Feb 24, 2026

    Not sure - if Fooocus supports flow matching, it should work. Snakebite 2 and Snakelite require a model shift of 3+ or you'll get noise.

    XpomulCivi
    Apr 29, 2026

    Hello, me again with a probably silly question.

    Do you think there is a way to push Snakebite or -lite towards the color fidelity of models like Cyberrealistic?

    I've tried a whole bunch of approaches like LoRAs, prompting, and even merging the OUT blocks of Cyberrealistic (which of course didn't work, but worth a shot).

    I personally think that CR has the best-looking colors of SDXL models, and while Snakebite has a lot of advantages, I find it to be a bit more limited in that regard, often featuring sepia-like tints.

    Checkpoint
    SDXL 1.0

    Details

    Downloads
    681
    Platform
    CivitAI
    Platform Status
    Available
    Created
    12/7/2025
    Updated
    4/30/2026
    Deleted
    -

    Files

    snakebite2_v24Turbo.safetensors

    Mirrors

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.