A completely new approach!
The original Snakebite was an Illustrious model injected with bigASP's compositional blocks. Snakebite 2.0, however, is primarily bigASP - but enhanced with a number of techniques to dramatically improve its textures and aesthetic capabilities.
❤️ If you like Snakebite, you can help offset the cost of training:
⚠️ IMPORTANT:
This model uses Flow Matching, so you must connect it to the ModelSamplingSD3 node in ComfyUI to get correct results.
If you're using a different UI that does not support the Flow Matching objective, you can try Snakebite v1.4 instead - that one behaves like a regular Illustrious model.
Why change the formula?
While I'm happy with the original Snakebite, there are some "gaps" between the two architectures I haven't been able to close with merging. Over the course of 1.0 through 1.4, I did what I could to minimize weird background objects and extra limbs, but it occurred to me that the perfect solution is already right here, in the form of vanilla bigASP 2.5.
I don't know if many people realize how good bigASP is... the prompt adherence is almost Flux-level with none of the censorship, plastic skin, steep hardware requirements, or bad licensing. It's pretty remarkable.
I set out to solve two of its main problems:
1. bigASP's textures are straight-up scuffed. I don't know if there was an issue with its aesthetic captioning or if it's simply "seen too much" (it was trained on 13 million images!), but no amount of (((high quality, masterpiece, so good))) is going to produce an image that looks even half as good as that of your average SDXL model.
2. You need to prompt it for everything. This is not necessarily a bad thing. The problem is, bigASP has some very weird ideas about the stuff you fail to mention. For example, if you ask for 1girl, standing, it might give you a picture of 1girl, standing, morbidly obese, upside down.
Both of these problems have been addressed, at least to an extent. It wasn't easy! bigASP's input blocks are really delicate - if you try massaging them with aesthetic LoRAs, the model tends to fall apart completely. Compatibility with SDXL LoRAs is poor, because they were not trained with the Flow Matching technique and bigASP's CLIP is very different.
Still, I found some blocks that responded well to my cosmetic upgrades. So I have been slowly and carefully introducing these blocks to things like Direct Preference Optimization, with the goal of helping bigASP know what to do when you don't provide a 500-word prompt (i.e. make every picture look decent and not insane).
👍 Advantages over v1
1. Prompt adherence is UH-MAZING for an SDXL model - check the demo gallery
2. Understands more complex concepts and interactions
3. Mangled limbs are almost nonexistent thanks to Flow Matching
4. Very flexible with styles; more photorealistic than v1 while also more capable of generating illustrations
5. It can spell words pretty well, provided you don't mind re-rolling a few times
👎 Disadvantages
1. Aesthetically, it's not as consistent as IL - but it's getting close in newer versions (v2.3)
2. The lack of IL means booru tag knowledge is worse, but you might be surprised at how much bigASP knows... it can generate tons of mainstream characters and concepts just fine on its own
🛠️ Recommended Settings
Turbo:
6-9 steps
LCM sampler
Beta, normal, or simple scheduler
CFG 1
Model shift of 3 (this is the value that bigASP was trained on) or 6 (allegedly even better according to bigASP's author)
Sample workflow: https://pastebin.com/Z35kNns6
Full:
25-40 steps
Euler ancestral sampler for speed, dpmpp_2s_ancestral for quality
Simple scheduler
CFG 4-6
Model shift of 3
Negative prompt strongly recommended (e.g. worst quality)
Sample workflow: https://pastebin.com/ynrJ1Nt2
Note: increasing the model shift may improve prompt adherence at the cost of quality. This is particularly useful with character LoRAs. Try a value between 6-8.
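To see why the shift trades quality for adherence, here is a minimal sketch of the SD3-style discrete time shift, which is my understanding of what ModelSamplingSD3 applies to the sigma schedule (the helper name and the linear example schedule are mine, not ComfyUI's):

```python
def shift_schedule(sigmas: list[float], shift: float) -> list[float]:
    """Remap flow-matching sigmas in [0, 1] with an SD3-style time shift.

    Higher shift values keep more of the schedule at high noise, which
    tends to favor composition/prompt adherence over fine texture detail.
    """
    return [shift * s / (1.0 + (shift - 1.0) * s) for s in sigmas]

linear = [1.0, 0.75, 0.5, 0.25, 0.0]
print([round(s, 2) for s in shift_schedule(linear, 3.0)])  # [1.0, 0.9, 0.75, 0.5, 0.0]
print([round(s, 2) for s in shift_schedule(linear, 8.0)])  # [1.0, 0.96, 0.89, 0.73, 0.0]
```

Note how shift 8 keeps the mid-schedule sigmas much closer to 1.0 than shift 3 does - more of the denoising budget is spent on global structure, less on detail.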
📖 Prompting Guide
The #1 thing is, be careful with your fluff. If you ask for warm lighting, you better believe you're gonna get warm lighting. Like, a lot of it. Even adding a simple high quality to your prompt might change your image completely. So be deliberate. Start with zero fluff.
The effect is not always intuitive. For example--as the author of bigASP has pointed out--the term masterpiece quality "causes the model to tend toward producing illustrations/drawings instead of photo."
If it's photos you want, I've yet to find phrases that work better than onlyfans, abbywinters photo. Hey now, I'm being serious! These terms work great for innocent stuff, too. (EDIT: As of v2.2, these helper phrases are optional. Using photograph of a... is usually enough in newer versions of Snakebite.)
Also, bigASP's training data was captioned with JoyCaption (online demo here, made by same author as bigASP) so you should try speaking to the model in a similar cadence and tone as JoyCaption does. Booru tags work okay too, but they tend to push images in more of a CGI direction.
Most of the time, if Snakebite is not giving you the image you want, it's a matter of finding another phrasing or adding (((emphasis))).
🏋️‍♂️ Training LoRAs
Option A
There is an official LoRA training script for bigASP 2.5 available here:
It's easy to install. I'm running it through my kohya-ss venv, as it only required a couple extra (non-conflicting) dependencies. However, it has a limited feature set and has not been thoroughly battle tested.
The train-lora.py script does not target as many modules as kohya's sd-scripts. This results in much smaller LoRA filesizes, but may prove insufficient for e.g. capturing a character's likeness, even at high rank and alpha. To fix this, search for "target_modules" in the script and update accordingly:
```
target_modules=["to_k", "to_q", "to_v", "to_out.0", "k_proj", "v_proj", "q_proj", "out_proj", "proj_in", "proj_out", "conv_in", "conv_out", "ff.net.0.proj", "ff.net.2"]
```
That should produce a file equal in size to kohya's (at fp16 precision).
The default settings are good. You can increase lora_rank and lora_alpha if you want, but the default value of 32 is usually fine. The script buckets images automatically. Be aware that it only saves a checkpoint at the end of training.
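For anyone unfamiliar with bucketing: it groups training images by aspect ratio so every batch shares one resolution instead of cropping everything square. A toy sketch of the idea (this is NOT the script's actual code, and the bucket list is illustrative):

```python
def nearest_bucket(w: int, h: int, buckets: list[tuple[int, int]]) -> tuple[int, int]:
    """Pick the training bucket whose aspect ratio best matches the image's."""
    ar = w / h
    return min(buckets, key=lambda b: abs(b[0] / b[1] - ar))

# Illustrative SDXL-style resolution buckets (~1 megapixel each).
BUCKETS = [(1024, 1024), (832, 1216), (1216, 832), (768, 1344), (1344, 768)]
print(nearest_bucket(3000, 4000, BUCKETS))  # portrait photo -> (832, 1216)
```

The image is then resized (and lightly cropped) to its bucket's dimensions at train time.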
Don't train on turbo versions of Snakebite. Either use the full version (once I've uploaded it), or train on bigASP 2.5 vanilla.
Option B
There is an unofficial fork of sd-scripts that supports Flow Matching, created by @deGENERATIVE_SQUAD:
This option takes more effort to set up, but it opens up a lot more possibilities for customization. You may need to adjust the code for compatibility with your environment. In my case, I had to remove the --loss_type="fft" parameter and swap out references to transforms.v2 in library/train_util.py with the original code from the sd3 branch.
Pass the following arguments to sdxl_train_network.py:
```
--flow_matching ^
--flow_matching_objective="vector_field" ^
--flow_matching_shift=3.0 ^
```
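For context, a hypothetical full invocation might look like the following. Only the three flow_matching flags come from the fork; the rest are standard sd-scripts options, and all paths, filenames, and hyperparameters here are placeholders you'd replace with your own:

```shell
:: Hypothetical example - adjust paths and hyperparameters for your setup.
accelerate launch sdxl_train_network.py ^
  --pretrained_model_name_or_path="C:\models\bigasp_v2-5.safetensors" ^
  --train_data_dir="C:\datasets\my_character" ^
  --output_dir="C:\loras" ^
  --network_module=networks.lora ^
  --network_dim=32 --network_alpha=32 ^
  --resolution=1024 ^
  --flow_matching ^
  --flow_matching_objective="vector_field" ^
  --flow_matching_shift=3.0
```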
Thank you. As always, I look forward to your feedback. Please share the model and upload some images to help it gain traction. It would be amazing if we could make Snakebite eligible for Civitai's onsite generator someday!
Description
Initial release. Tell your friends. Recommended for inference, not suitable for finetuning.
Comments (27)
Another awesome model! I knew JoyCaption would come in handy. There's a JoyCaption GGUF on huggingface that I can run in ollama or lmstudio, just in case.
Thanks! Yes, Joycaption is fantastic. I think you can get it down to ~3GB VRAM with the GGUF. It's a good way of generating starting prompts for Snakebite.
Good stuff! Looking forward to playing with it. Will there be an illustrious version? Would be interested if you wrote up how you do your merging or if you have any content I could read.
Thanks - I'm thinking about merging some of IL back in eventually, but right now I want to continue refining bigASP's style. There may be future updates for the 1.x branch of Snakebite, which will remain based on IL.
No writeups yet, but I appreciate the interest 🙂
Training my Lora now, I updated the script with a few QOL items. I was having a blast with your IL even though you said don't go past bigASP res, I was getting amazing results with 1040x1520 - Every now and then it would go haywire though :)
@kiko9 The new model is amazing with character LoRAs. I recommend dropping output.1 (AKA "out2" in Comfy's LoRA Loader Block Weight node) as it seems to write a lot of style-related information in this block that is unnecessary for reproducing a character's likeness.
This is probably the best SDXL I've ever used. It handles prompting well and doesn't seem like a hit and miss with getting porn working.
I will start with: as a noob to this AI generation thingy, I tried this model, and the first thing I noticed over the other models I tried is that this one actually listens to my prompt 👍, so prompt adherence is actually good. The challenge I faced with this model was coming up with a prompt that can generate what I want (mostly South Asian, Indian, Lankan women with hairy bodies 😂). I had some success but really struggled to get images without shiny, plastic-looking skin. The "onlyfans" word in the prompt actually helped overcome that problem, but it only worked for solos; for sexual acts involving two people with natural skin and a realistic look, that's the one thing I couldn't achieve (maybe I didn't use the correct phrases in the prompt)... So someone tell me, how can I do that? Overall it's actually a good checkpoint and I really wanna see more updates in the future!!! Keep going, you are awesome 👍 ❤️ Thanks for the time and effort
onlyfans acts like a style tag. There isn't much porn on the OnlyFans site itself, so it leans toward solo content. For sex acts, try prompting amateur-style websites that have porn as their main focus.
https://civitai.com/articles/5864/mining-the-bigasp-caption-database-version-2-update
@aureliocathal937 Oh that's a humongous help ! Thanks ❤️
Thanks for the kind words, @Nekh !
@aureliocathal937 Great resource. It looks like there's also a tag dump for bigASP 2.0 here:
https://gist.github.com/fpgaminer/0243de0d232a90dcae5e2f47d844f9bb
Not sure if every tag will still work, since JoyCaption was used in v2.5. But it doesn't hurt to have more resources.
So will this work in Forge?
Why not?
Doesn't work; it needs the ModelSamplingSD3 node, Comfy only.
@aureliocathal937 thanks for saving a download and time
there has got to be some way to replicate or emulate the shift parameter in forge. alas, i am too stupid to try...
Save with an applied Turbo lora with strength -1 to finetune
"Snakebite is so back" It didn't even leave, the previous version was published a few days ago
seems broken? All i get are blobs of color no matter what setting.
If you've used an older version of the snakebite workflow, you may be experiencing issues with the custom sigma node. Please review your workflow.
@grrrr Yep, thanks for pointing that out. The v1 custom sigmas will not work in v2. Instead, here are some sigmas that can kinda-sorta improve prompt adherence at the cost of a little image quality:
```
15, 0.99, 0.975, 0.8929, 0.7500, 0.6429, 0.5000, 0.3000, 0.0000
```
But I think it's usually best to just roll with the "Beta" scheduler in v2.
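If you do experiment with hand-written sigma lists, a quick sanity check saves some head-scratching. A minimal sketch (the example schedules below are illustrative, not recommendations):

```python
def validate_sigmas(sigmas: list[float]) -> bool:
    """A custom sigma schedule should be strictly decreasing and end at 0."""
    decreasing = all(a > b for a, b in zip(sigmas, sigmas[1:]))
    return decreasing and sigmas[-1] == 0.0

print(validate_sigmas([1.0, 0.9, 0.75, 0.5, 0.3, 0.0]))  # True
print(validate_sigmas([1.0, 0.9, 0.95, 0.5, 0.0]))       # False (not monotonic)
```

A schedule that fails either check (non-monotonic values, or a final sigma above zero) is a common cause of the blobs-of-color output described above.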
And of course, make sure you're using ModelSamplingSD3 @frankmike 🙂
Snakebite 2.1 is back! ;)
Will you be doing any non-turbo versions?
Full version of 2.1 should be available later tonight or tomorrow. WE ARE SO BACK!
is there a non turbo model that will be released soon. I got my own configurations for bigasp 2.5 using the dmd2 lora.
There is now! Also, if your DMD2 configuration is better than mine, please tell me your secrets 🙃
@liftweights dmd2 lora at 100% strength with lcm sampler with 1.0 cfg and 4 steps. I also use a spo 4k lora at 100%