
    A completely new approach!

    The original Snakebite was an Illustrious model injected with bigASP's compositional blocks. Snakebite 2.0, however, is primarily bigASP - but enhanced with a number of techniques to dramatically improve its textures and aesthetic capabilities.

    ❤️ If you like Snakebite, you can help offset the cost of training:

    Buy liftweights a Coffee


    ⚠️ IMPORTANT:

    This model uses Flow Matching, so you must connect it to the ModelSamplingSD3 node in ComfyUI to get correct results.

    If you're using a different UI that does not support the Flow Matching objective, you can try Snakebite v1.4 instead - that one behaves like a regular Illustrious model.


    Why change the formula?

    While I'm happy with the original Snakebite, there are some "gaps" between the two architectures I haven't been able to close with merging. Over the course of 1.0 through 1.4, I did what I could to minimize weird background objects and extra limbs, but it occurred to me that the perfect solution is already right here, in the form of vanilla bigASP 2.5.

    I don't know if many people realize how good bigASP is... the prompt adherence is almost Flux-level with none of the censorship, plastic skin, steep hardware requirements, or bad licensing. It's pretty remarkable.

    I set out to solve two of its main problems:

    1. bigASP's textures are straight-up scuffed. I don't know if there was an issue with its aesthetic captioning or if it's simply "seen too much" (it was trained on 13 million images!), but no amount of (((high quality, masterpiece, so good))) is going to produce an image that looks even half as good as that of your average SDXL model.

    2. You need to prompt it for everything. This is not necessarily a bad thing. Problem is, bigASP has some very weird ideas about the stuff you fail to mention. For example, if you ask for 1girl, standing it might give you a picture of 1girl, standing, morbidly obese, upside down.

    Both of these problems have been addressed, at least to an extent. It wasn't easy! bigASP's input blocks are really delicate - if you try massaging them with aesthetic LoRAs, the model tends to fall apart completely. Compatibility with SDXL LoRAs is poor, because they were not trained with the Flow Matching technique and bigASP's CLIP is very different.

Still, I found some blocks that responded well to my cosmetic upgrades. So I have been slowly and carefully subjecting these blocks to techniques like Direct Preference Optimization, with the goal of helping bigASP know what to do when you don't provide a 500-word prompt (i.e. make every picture look decent and not insane).
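For the curious, this kind of block-targeted merging boils down to something like the sketch below. Everything in it is illustrative - the filenames, the block prefix, and the blend ratio are placeholders, not the actual Snakebite recipe:

    # Minimal sketch of block-targeted merging with safetensors.
    # All names and values here are placeholders, not the real recipe.
    from safetensors.torch import load_file, save_file

    base = load_file("bigasp25.safetensors")          # hypothetical filename
    donor = load_file("aesthetic_donor.safetensors")  # hypothetical filename

    # Only blend blocks that tolerate cosmetic edits; leave the delicate
    # input blocks alone, per the notes above.
    MERGE_PREFIX = "model.diffusion_model.output_blocks."  # illustrative choice
    ALPHA = 0.3  # donor weight

    merged = {}
    for key, weight in base.items():
        if key.startswith(MERGE_PREFIX) and key in donor:
            merged[key] = (1 - ALPHA) * weight + ALPHA * donor[key]
        else:
            merged[key] = weight

    save_file(merged, "snakebite_merge.safetensors")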


    👍 Advantages over v1

1. Prompt adherence is UH-MAZING for an SDXL model - check the demo gallery

    2. Understands more complex concepts and interactions

    3. Mangled limbs are almost nonexistent thanks to Flow Matching

    4. Very flexible with styles; more photorealistic than v1 while also more capable of generating illustrations

    5. It can spell words pretty well, provided you don't mind re-rolling a few times

    👎 Disadvantages

1. Aesthetically, it's not as consistent as Illustrious (IL) - but it's getting close in newer versions (v2.3)

    2. The lack of IL means booru tag knowledge is worse, but you might be surprised at how much bigASP knows... it can generate tons of mainstream characters and concepts just fine on its own


⚙️ Recommended Settings

Turbo:

    • 6-9 steps

    • LCM sampler

    • Beta, normal, or simple scheduler

    • CFG 1

    • Model shift of 3 (this is the value that bigASP was trained on) or 6 (allegedly even better according to bigASP's author)

    • Sample workflow: https://pastebin.com/Z35kNns6
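If you drive ComfyUI through its API instead of the GUI, the important wiring looks roughly like this. It's a sketch only - node ids and the checkpoint filename are made up, and the pastebin workflow above is the authoritative version:

    # ComfyUI API-format graph (Python dict) showing where ModelSamplingSD3 goes.
    graph = {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "snakebite2_turbo.safetensors"}},  # hypothetical filename
        # Flow Matching checkpoints need ModelSamplingSD3 between loader and sampler:
        "2": {"class_type": "ModelSamplingSD3",
              "inputs": {"model": ["1", 0], "shift": 3.0}},
        "3": {"class_type": "KSampler",
              "inputs": {"model": ["2", 0], "seed": 0, "steps": 8, "cfg": 1.0,
                         "sampler_name": "lcm", "scheduler": "beta", "denoise": 1.0,
                         "positive": ["4", 0], "negative": ["5", 0],
                         "latent_image": ["6", 0]}},
        # Nodes 4-6 (two CLIPTextEncode nodes and an EmptyLatentImage) omitted for brevity.
    }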

    Full:

    • 25-40 steps

    • Euler ancestral sampler for speed, dpmpp_2s_ancestral for quality

    • Simple scheduler

    • CFG 4-6

    • Model shift of 3

    • Negative prompt strongly recommended (e.g. worst quality)

    • Sample workflow: https://pastebin.com/ynrJ1Nt2

Note: increasing the model shift may improve prompt adherence at the cost of quality. This is particularly useful with character LoRAs. Try a value between 6 and 8.
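For reference, the shift is just a remapping of the sampler's timesteps toward the high-noise end of the schedule. A minimal sketch, assuming ModelSamplingSD3 uses the same formula as SD3's discrete flow sampling:

    # Timestep shift (assumed to match what ComfyUI's ModelSamplingSD3 applies).
    def shift_sigma(t: float, shift: float = 3.0) -> float:
        # Map a uniform timestep t in [0, 1] to a shifted noise level.
        return shift * t / (1.0 + (shift - 1.0) * t)

    # Higher shift pushes noise levels up, so more steps go to global structure
    # (composition, limbs) and fewer to fine texture:
    for t in (0.25, 0.5, 0.75):
        print(t, round(shift_sigma(t, 3.0), 3), round(shift_sigma(t, 6.0), 3))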


    📖 Prompting Guide

    The #1 thing is, be careful with your fluff. If you ask for warm lighting, you better believe you're gonna get warm lighting. Like, a lot of it. Even adding a simple high quality to your prompt might change your image completely. So be deliberate. Start with zero fluff.

The effect is not always intuitive. For example, as the author of bigASP has pointed out, the term masterpiece quality "causes the model to tend toward producing illustrations/drawings instead of photo."

    If it's photos you want, I've yet to find phrases that work better than onlyfans, abbywinters photo. Hey now, I'm being serious! These terms work great for innocent stuff, too. (EDIT: As of v2.2, these helper phrases are optional. Using photograph of a... is usually enough in newer versions of Snakebite.)

Also, bigASP's training data was captioned with JoyCaption (online demo here, made by the same author as bigASP), so you should try speaking to the model in a similar cadence and tone as JoyCaption does - full descriptive sentences, along the lines of A photograph of a young woman standing in a sunlit kitchen, smiling at the camera. Booru tags work okay too, but they tend to push images in more of a CGI direction.

    Most of the time, if Snakebite is not giving you the image you want, it's a matter of finding another phrasing or adding (((emphasis))).


    🏋️‍♂️ Training LoRAs

    Option A


    There is an official LoRA training script for bigASP 2.5 available here:

It's easy to install. I'm running it through my kohya-ss venv, as it only required a couple of extra (non-conflicting) dependencies. However, it has a limited feature set and has not been thoroughly battle-tested.

    The train-lora.py script does not target as many modules as kohya's sd-scripts. This results in much smaller LoRA filesizes, but may prove insufficient for e.g. capturing a character's likeness, even at high rank and alpha. To fix this, search for "target_modules" in the script and update accordingly:

    target_modules=["to_k", "to_q", "to_v", "to_out.0", "k_proj", "v_proj", "q_proj", "out_proj", "proj_in", "proj_out", "conv_in", "conv_out", "ff.net.0.proj", "ff.net.2"]

That should produce a file equal in size to kohya's (at fp16 precision).

The default settings are good. You can increase lora_rank and lora_alpha if you want, but the default value of 32 is usually fine. The script buckets images automatically. Be aware that it only saves a checkpoint at the end of training.

    Don't train on turbo versions of Snakebite. Either use the full version (once I've uploaded it), or train on bigASP 2.5 vanilla.


    Option B

There is an unofficial fork of sd-scripts that supports Flow Matching, created by @deGENERATIVE_SQUAD:

This option takes more effort to set up, but it opens up a lot more possibilities for customization. You may need to adjust the code for compatibility with your environment. In my case, I had to remove the --loss_type="fft" parameter and swap out references to transforms.v2 in library/train_util.py for the original code from the sd3 branch.

    Pass the following arguments to sdxl_train_network.py:

--flow_matching ^
--flow_matching_objective="vector_field" ^
--flow_matching_shift=3.0
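For anyone curious what the objective itself looks like, here's a rough PyTorch sketch of rectified-flow ("vector field") training with the timestep shift applied. The model call signature is hypothetical - this is the general idea, not the fork's exact code:

    import torch

    def flow_matching_loss(model, x0, cond, shift=3.0):
        # x0: clean latents; cond: text conditioning (shapes depend on the model).
        noise = torch.randn_like(x0)
        t = torch.rand(x0.shape[0], device=x0.device)  # uniform timesteps in [0, 1]
        t = shift * t / (1.0 + (shift - 1.0) * t)      # same shift used at inference
        t_ = t.view(-1, 1, 1, 1)
        x_t = (1.0 - t_) * x0 + t_ * noise             # interpolate clean -> noise
        target = noise - x0                            # ground-truth vector field
        pred = model(x_t, t, cond)                     # hypothetical signature
        return torch.nn.functional.mse_loss(pred, target)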


    Thank you. As always, I look forward to your feedback. Please share the model and upload some images to help it gain traction. It would be amazing if we could make Snakebite eligible for Civitai's onsite generator someday!

    Description

    Full version for finetuning. If you want to use this for inference, you will almost certainly need to apply your own aesthetic and/or accelerator LoRA stack to get decent results.


    Comments (19)

svvabd323 · Oct 31, 2025 · 3 reactions

Great job. Prompt following is comparable to Flux or Qwen. This is something of a revolution in SDXL. Thank you. Don't stop.

pavanijakati83575 · Nov 1, 2025 · 2 reactions

    Best model.

frankmike · Nov 2, 2025 · 1 reaction

I still have issues with this. I tried the first version and now the latest, and it's still a bunch of colorful blobs. I use Easy Diffusion, which is a bit dumber than other UIs, but it's what I use and I want it to work.

dr4kel4wson5375127 · Nov 3, 2025 · 1 reaction

    "This model uses Flow Matching, so you must connect it to the ModelSamplingSD3 node in ComfyUI to get correct results."

JBsharp · Nov 3, 2025 · 1 reaction

Utilizing this with the caption generator from bigASP. It is wild!!! And the generations are fast. Hands down the best one so far!!

pavanijakati83575 (reply)

Hi, could you share the procedure, workflow and links?

JBsharp · Nov 9, 2025

    @pavanijakati83575 His images have the metadata to pull from.

liftweights (Author) · Nov 3, 2025 · 10 reactions

    Hey guys! Thank you for the encouraging words about Snakebite. Here's a quick update on v2.3:

    - I'm ~70% done updating my training data. I'm being extremely selective with what gets in - every picture must satisfy a set of rules regarding composition, use of color, estimated compatibility with JoyCaption, and so on. With these rules in place, only 1 or 2 out of 100 images from high-quality sources qualify. I think approaching the training data this way should help a lot with the consistency of the model. Not to mention, I have "limited slots" due to hardware constraints. I can only afford to train a model on 2-3K images, max. But that's plenty for aesthetic finetuning.

    - v2.3 will be trained on a fork of kohya's sd-scripts that supports Flow Matching. By using sd-scripts, I can take advantage of different schedulers, bucketing rules, and plenty of other features.

    - I'm testing a couple strategies to minimize hallucinations from JoyCaption.

    It may take another week to complete these tasks. Add a few days for the training itself, and then another day or two to finalize the merge recipe. So we're looking at a 2-week ETA for Snakebite v2.3, barring any unforeseen circumstances. Hopefully it's worth the wait!

XpomulCivi · Nov 3, 2025 · 2 reactions

Sounds great, I'm also looking forward to your workflow - it will definitely be better than what my amateur tries yielded :D

JBsharp · Nov 5, 2025 · 1 reaction

    Perfection takes time. I can't imagine what it will look like compared to the already great results I am getting.

JBsharp · Nov 18, 2025

    Excited to see what is to come on the next round. Are you thinking about releasing this soon?

liftweights (Author) · Nov 19, 2025 · 1 reaction

    @JBsharp Getting close, training is currently at 16 epochs :) Check my latest comment for more info.

JBsharp · Nov 19, 2025

@liftweights Thank you for the update! Random question on JoyCaption: are you utilizing "Stable Diffusion Prompt" as your caption type? Just want to make sure I'm maximizing my prompts.

liftweights (Author) · Nov 20, 2025 · 1 reaction

    @JBsharp Regarding JoyCaption, the "Stable Diffusion" preset works well if you're using the full model. However, I'm using a quantized GGUF (this one to be precise) with a prompt that's very similar to the "Straightforward" preset. I found that it lowered the hallucination rate by about 10%, at least with my dataset.

10388254 · Nov 11, 2025

Hey, I was wondering. The one person from the 2.5 thread who figured out how to train it... Am I reading correctly that they did something to "fix" the issues with 2.5, and is that fix available somewhere? I love ba2.5, but it has issues.

liftweights (Author) · Nov 18, 2025 · 7 reactions

    Training has officially started on Snakebite v2.3 🙂

    The prep took a bit longer than anticipated - I was able to reduce hallucinations from JoyCaption, but I still needed to clean up many captions by hand. Also, I ran into a few roadblocks while setting up experimental training features (namely PiSSA decomposition and Prodigy Plus ScheduleFree). I think these features are working correctly now, and I'm crossing my fingers that the training parameters are in a reasonable spot.

    If all goes well, the model should be ready this weekend or sometime next week.

10447678 · Nov 18, 2025

    Hey, question. So BigAsp2.5 is outstanding, but some "ideas" seem to be present but... damaged. Is there any way to overcome this issue? I keep ending up with like wood veneer or screen door effects and a lot of key parts of gens seem to be blurred out of presence, almost like entire blocks of ideas were whitewashed away. It's really annoying and I know what it CAN do, I just can't get it to do it regularly. Please, any help would be legendary.

liftweights (Author) · Nov 19, 2025

    Hey @plshalpagainm8, are you referring to vanilla bigASP or Snakebite?

    My goal with Snakebite is to improve the stability and quality of what bigASP already knows, without damaging other parts of the latent space. So if you found some prompts that work in BA but not in Snakebite, I would love to hear them - I can try to ensure that those prompts will work better in v2.3.

    BA itself is very inconsistent, especially with regard to aesthetics. I'm using a stack of 10 or so LoRAs with a block merging strategy to mitigate these problems.

10447678 · Nov 19, 2025

    @liftweights "So if you found some prompts that work in BA but not in Snakebite, I would love to hear them" Ermm... Uh...

Do you know if the asp fix thing in the article from the aidegen guy will improve the output with diverse concepts, or was that actually just for training? I don't understand the need to train a fix for it just to train it, but I'm just a user and not a trainer. I feel like I'm a hair's fart away from greatness with 2.5, it was practically a godsend, but the randomness and scarcity of the exceptional quality outputs is honestly probably making my addiction worse at this point, so I'm pretty desperate to grab a better version that outputs more reliably and realistically.

    I tried snakebite and so far it seems quite good. It seems to improve the consistency a lot at the cost of that very rare "Holy F" gen that takes you back to the old days of the net (unfortunately, I'm addicted to those and the dopamine they provide). I'm just very tired of having to try to interpret the issues I see on each gen into words and try them as a negative. I must be hundreds of hours into trial and error on 2.5 and its derivatives, and I still can't get consistent results AND occasional "OMG" results. I've gotten some really good results with some negative prompt sets, but it feels like the model had a braindump and I'm just trying to fiddle my way around it by including different concepts and negatives.

    Any tips to overcome the 2.5 issues, loras, training, etc? I'm ok with inconsistency, but the blur/pixelation is really bad with some stuff.

Checkpoint
SDXL 1.0

Details

Downloads: 209
Platform: CivitAI
Platform Status: Available
Created: 10/31/2025
Updated: 4/30/2026
Deleted: -

    Files

    snakebite2_v22.safetensors
