Anima version is available!
Nova Furry XL
A 2D/2.5D furry checkpoint model that renders great detail on any type of fur, scales, and feathers. It aims to be an XL-fied Peaki Furry.
Rules
You cannot use the generated images commercially unless they are edited (merely converting them to black and white does not count as editing)
Advertising is always welcome
Recommended settings:
Sampler: Euler a for best performance
Steps: 20-50 steps
(Pony)
CFG Scale: 6-7
Prompt: score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, source_furry, BREAK, furry, anthro, detailed face eyes and fur, {Animal Type}
Negative Prompt: blurry, jpeg artifacts, username, watermark, signature, normal quality, worst quality, large head, low quality, text, error, missing fingers, extra digits, fewer digits, bad eye
(Illustrious)
CFG Scale: 3-5
Prompt: masterpiece, best quality, amazing quality, very aesthetic, high resolution, ultra-detailed, absurdres, newest, scenery, furry, anthro, {Prompt}, BREAK, depth of field, detailed fluffy fur, volumetric lighting
Negative Prompt: human, multiple tails, modern, recent, old, oldest, graphic, cartoon, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured, long body, lowres, bad anatomy, bad hands, missing fingers, extra digits, fewer digits, cropped, very displeasing, (worst quality, bad quality:1.2), bad anatomy, sketch, jpeg artifacts, signature, watermark, username, simple background, conjoined, bad ai-generated
Description
Better prompt adherence and species response
NoobAI EPS v1.1 + Illustrious v2.0 stable DARE applied
Derived from Illustrious v3.0
FAQ
Comments (71)
This model has more evolutions than Germany's (once again, a compliment)
And just when you thought you had your images perfected. BAM! Looking forward to trying it out.
Can someone explain to me the difference between version 8A and 8B? What does being derived from v3 or v4 do?
A = Reduced Noise, Better Posing
B = Flat Coloring
Sounds like A is the better pick!
I looked down the comments and saw this:
"The result is IL v6B: extend from IL v3.0, which has more flat coloring rather than volumetric coloring
IL v6A is the same as IL v5.0 but with improved posing and reduced noise
Both of them are good and compatible with Illustrious / NoobAI LoRAs"
@Craz3D Much obliged
Fun fact about Nova Furry Illustrious v8B:
Its SHA256 value is identical to realisticAllMix v1.3 on SD1.5
It's a pure coincidence, even though the models differ in size
The chance is 1 / 115792089237316195423570985008687907853269984665640564039457584007913129639936
Which is 2^-256
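As a sanity check on that figure (an editorial illustration, not part of the original comment), Python's arbitrary-precision integers confirm the number quoted above:

```python
# Number of possible SHA-256 digests; a random collision between two
# specific files has probability 2**-256.
n_digests = 2 ** 256
print(n_digests)
# 115792089237316195423570985008687907853269984665640564039457584007913129639936

# As a float, that probability is on the order of 1e-77.
print(2.0 ** -256)  # roughly 8.64e-78
```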
That's really interesting, but it's far more likely there's something else going on. In the nicest possible way, most likely it's your calculations; my second thought is that something's up with the SHA256 implementation. Otherwise the chances are so absolutely minuscule that I'd safely call it impossible.
@ae4t4ffg There definitely is something wrong with the SHA256 calculation
Hugging Face says the SHA256 is 0cf76a2e79a6bf9abba8965c30a3a939bc0f8f3b31c1cf546c2755f974335b83
While CivitAI says 954588EA7AD3096572B52C66CDD7BE12FD27D6E081756656F09AFFA4DB34C69D
The SHA256 on both systems matches for V8A, though
When I download 8B with Stability Matrix, it reports a hash validation error.
@Makadhi Try using here instead
https://huggingface.co/Chattiori/ChattioriMixesXL/blob/main/NovaFurryILV8B.safetensors
lmao
When I download 8B from CivitAI and calculate the SHA256 locally, the hash is 0cf76a2e79a6bf9abba8965c30a3a939bc0f8f3b31c1cf546c2755f974335b83, which is different from what is shown on CivitAI but matches the Hugging Face one. This seems more like a display or internal error on CivitAI's part, not a hash collision.
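For anyone who wants to reproduce that check, here is a minimal sketch of hashing a checkpoint locally (the file name is hypothetical; streaming in chunks avoids loading a multi-GB file into RAM):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute a file's SHA-256 by streaming it in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical local file name:
# print(sha256_of_file("NovaFurryILV8B.safetensors"))
```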
I filed a ticket to CivitAI support and they promptly fixed it. The SHA256 hash should be correct now.
I need to ask. I use local SD with ReForge. I see you use Euler A sampling, but which scheduler is better?
Euler A has the best performance compared to other sampling methods
Less noise and great details
@Crody Yeah, I use that, but running locally I can't find the best scheduler; I mostly use Karras, for example. Thank you for answering, by the way.
I use Euler A - scheduler: Automatic
At 25-35 steps and a rather low CFG, around 4. Any higher and the colors become too harsh, at least on my older v6 model; I will update to v8.
I love images in the furry style :D
Hey! Which CLIP Text Encode node should I use?
I'm getting an error from KSampler (Efficient):
mat1 and mat2 shapes cannot be multiplied (924x2048 and 768x320)
For a basic node set-up: Load Checkpoint -> CLIP Text Encode (Advanced) w/ A1111 + Empty Latent Image w/ desired resolution -> KSampler (Efficient)
Any other kind of error would come from other nodes conflicting with those, so I wouldn't know exactly what else is causing it; I don't believe CLIP Text Encode itself should throw that error.
Thanks!
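For context on that "mat1 and mat2 shapes cannot be multiplied" message (an editorial reading, not confirmed in the thread): 2048 is the width of SDXL's combined text-encoder output, while a 768-wide weight belongs to an SD1.5-sized layer, so the error usually means components from the two model families got mixed. A numpy sketch of the mismatch, with dimensions taken from the error message:

```python
import numpy as np

cond_sdxl = np.zeros((924, 2048))   # SDXL conditioning: 924 token rows, 2048 features
w_sd15 = np.zeros((768, 320))       # an SD1.5-sized cross-attention projection

try:
    cond_sdxl @ w_sd15              # inner dimensions 2048 vs 768 don't match
except ValueError as e:
    print("shape mismatch:", e)

w_sdxl = np.zeros((2048, 320))      # a projection sized for SDXL conditioning
print((cond_sdxl @ w_sdxl).shape)   # (924, 320)
```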
Given the quality and depth and refinement of the model's image dataset, do you even need prompting tags like "masterpiece" or "low quality" in the negative at all? It seems like ALL of the images would or should be pre-filtered by now.
If you want to see how big a difference "masterpiece" makes, try adding "(masterpiece:5)" to your prompt and you should see a huge difference in shading and composition. It's more that you should have them for best results, but otherwise it works fine enough without.
They do still have their uses, even if not necessary. 👍
@ermtest I'm not sure that is a practical or even useful A/B test since anything pushed that hard in weight is going to have an impact on the sampling just from conditioning pressure. I'm talking more intrinsic and pragmatic value of the terms alone. I've done A/B testing dozens of times using different control prompts and I couldn't tell any obvious quality differences. At least not with this model and the size/specificity of the prompts I regularly use.
@Uncanny_Undyne At least from my testing I do see tangible differences when using those terms; having them at higher weights lets me see how each of them impacts the image at its extremes, so I can then apply them normally and test for redundancy with a fixed seed. "masterpiece" does seem to have its own latent style, judging from comparing generations with it, without it, and with other terms selectively, one by one. It seems that you get the most stable generations and highest prompt adherence without the quality tags, but the positive tags can get 'better results' at the cost of stability and prompt adherence.
@Uncanny_Undyne
Here's a link to a collection of different tests of the positive prompting tags:
https://civitai.com/posts/17929758
It seems "masterpiece" and "best quality" do have notably different effects on image generation at their extremes; at a normal weight they won't have that same extent of effect, but will still obviously steer the result in that direction.
So, unnecessary but useful. Better to have access to them than have nothing, in my opinion at least.
What prompts of enhancement can I use that are most effective? Y’know, like the “Best Quality, Detailed Background” Stuff like that. The model itself is good, but I just can’t find a good combination between Prompts, CFG, and Steps, if anyone out there could give me some tips, I’d be glad to listen. I’m new to this generative stuff, and I’m a cringe mobile user. thanks.
Generally 20 to 50 steps and a CFG around 6 is fine for most models. The most effective prompt enhancer is actually being more specific and detailed about what you want to see, rather than general-purpose "quality enhancer" tags. The better trained and more familiar the model is with a tag, the more impact your tags and weights have. I recommend starting as simply as possible with what you want to see and iterating on what comes out, using a fixed seed. Use both your negative and positive prompts frugally and explicitly.
@Uncanny_Undyne alrighty! I’ll be sure to use my microscopic brain to understand this! Thanks.
@Uncanny_Undyne My two cents: a CFG of 6 is way too high. Generally you'll want to be between a CFG of 3.5 and 4.5. Anything higher than 5 tends to over-weight LoRAs, and images start to look stiff and less organic.
Nice model, but it has become too anime-weighted and it's almost impossible to get ferals
Try adding furry, anthro to the prompt, and human, smooth skin to the negative prompt
What is a feral? Different from furry or anthro?
Is this just not working for anyone else? No model is able to load my generation history?
Hello there. I've been having some issues with the model when running it locally: it's oddly generating blank white images. That doesn't happen with my other downloaded models.
What could it be?
Hello. With this checkpoint and these settings, generating a picture takes more than 5 minutes. Is that normal for an RTX 3070 GPU?
Are you trying to create images around 2000 px or larger? Make sure your settings, whether you use upscaling or not, don't go over about 2400, or generation will take longer.
I have a 3070; on the v7B model one image generates in 31 seconds. Euler A, 30 steps, CFG 5, 1024x1536.
I'm getting only black images when generating, struggling to work out why. I've tried varying the settings as much as possible, still no luck. Any ideas?
Do we use danbooru tags or e621 tags for these models?
You can use both, but danbooru tags work more strongly than e621's
Amazing Model 👏
This is some good stuff. Permission to post some of these to e6AI? If granted, who do I credit?
Tagging with novafurryxl_(model) would be good enough
After simply searching up “nova” on search, it’s honestly wild how many you’ve made, from furries to kemono, to anime and oranges? Props to you for making all of this, lord knows that even if I had the tech you used to make this, I’d barely feel like getting the images TO make it.
All models I made as Nova series are listed in the collection below
@Crody well you did a bloody good job on every last one of em, Have a darn excellent day.
Is there GGUF support? And if not, will an fp8 version exist at some point?
Not to bother you or anything, but when’s version 9 coming out? I’m not rushing, just curious. (or are we gonna get 8 and every letter of the alphabet lol)
Almost all of my models update monthly, so this model gets its update around mid-July
@Crody Oh nice, must be a difficult time crunch to do…
This checkpoint works great here on Civitai, but it gives me low quality images when I generate locally on ComfyUI. Any idea why? I'm using the same exact settings
A couple of tips: use 'CLIP Text Encode (Advanced)' with A1111 weights, which I believe CivitAI uses, and use the recommended supporting tags from this model's description.
One of the best Models on the site. Thank you
What source of images do you use to make/update this model? I’ve been sorta thinking on trying to make a model. Do I have to get them from other websites?
I merge models: no training, so no images are needed for model creation
It's basically this model + that model over and over, plus adding some LoRAs into the merge to stabilize them
How I merge and create images can be found in my article
https://civitai.com/articles/12245/crodys-model-merge-guide-team-c
@Crody Thanks, I’m still an idiot.
@M1Carbinemann Even though now I use mostly Supermerge & untitled merger.
Without his guide and tools (Chattiori Model Merger) I would never have made Banana Splitz XL, iffymix, Yiff in Hell, and Stormtooth XL.
So besides just the Nova models, he's influenced quite a few other models.
P.S. If you are looking to make a model, don't start with Chattiori; it's pretty advanced.
I would start with something that has a GUI, like supermerger. It runs as a tab in A1111/Forge.
I also highly recommend looking through this:
https://rentry.co/BlockMergeExplain
It explains how the UNet works and how block merging works.
It's honestly very simple once you realize how it works.
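The merging those tools perform boils down to weighted sums of checkpoint tensors. A simplified numpy sketch (an editorial illustration: the block prefixes and alphas are made up, and real tools load/save tensors via safetensors and use actual UNet key names):

```python
import numpy as np

def weighted_merge(sd_a, sd_b, alpha=0.5):
    """Plain merge: interpolate every tensor in the state dicts by the same alpha."""
    return {k: (1 - alpha) * sd_a[k] + alpha * sd_b[k] for k in sd_a}

def block_merge(sd_a, sd_b, block_alphas, default=0.5):
    """Block merge: choose alpha per tensor by matching a key prefix (illustrative names)."""
    def alpha_for(key):
        for prefix, a in block_alphas.items():
            if key.startswith(prefix):
                return a
        return default
    return {k: (1 - alpha_for(k)) * sd_a[k] + alpha_for(k) * sd_b[k] for k in sd_a}
```

Supermerger's MBW sliders effectively pick one alpha per UNet block, which is what `block_alphas` stands in for here.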
This and a few other check points have been nothing but a glitchy mess for me for about a week.
This checkpoint makes the most beautiful 2D/2.5D dragons. Yummy dragon weenies, yay!
Version 8A is hands down the best version of Nova Furry.
I don't think so, bro. What do you base that on? Did you really compare it with the others? I have to know which one is really the best 😭
@Chris198 No. I just selected v8a for shits and giggles.
@Chris198 8A is my favourite since 6A. I've tried most versions.
Yes
One of the best!
Hey! What does "if it's not edited" mean? Can I see the full agreement?
Edit some details like fingers, eyes, or any other kind of errors
Additionally compositing colors and cropping would be better
@Crody Hello! Thanks for your reply!
But I have a few more questions, please...
I am a humble indie solo developer. I am creating a dream game that will probably never get released. Yet, I don't want to violate anyone's rights...
If I edit the images created by this model through inpainting (changing clothes, colors, etc.), are these changes sufficient?
Can I use the images created by this model to create 3D meshes and UV textures for them?
PS. Of course, I will credit the use of this model in the game's credits if it ever gets released!
@teanomg If that's the case, then it's fine to use them
You can use them for 3D meshes as well
@Crody Thank you so much!
Details
Files
Available On (2 platforms)
Same model published on other platforms. May have additional downloads or version variants.



