Anima version is available!
Nova Furry XL
A 2D/2.5D furry checkpoint model that renders great detail on any type of fur, scales, and feathers, aiming to be an XL-fied Peaki Furry
Rules
You cannot use the generated images commercially unless they are edited (merely converting them to black and white does not count as editing)
Advertising is always welcome
Recommended settings:
Sampler: Euler a for best performance
Steps: 20-50
(Pony)
CFG Scale: 6-7
Prompt: score_9, score_8_up, score_7_up, score_6_up, score_5_up, score_4_up, source_furry, BREAK, furry, anthro, detailed face eyes and fur, {Animal Type}
Negative Prompt: blurry, jpeg artifacts, username, watermark, signature, normal quality, worst quality, large head, low quality, text, error, missing fingers, extra digits, fewer digits, bad eye
(Illustrious)
CFG Scale: 3-5
Prompt: masterpiece, best quality, amazing quality, very aesthetic, high resolution, ultra-detailed, absurdres, newest, scenery, furry, anthro, {Prompt}, BREAK, depth of field, detailed fluffy fur, volumetric lighting
Negative Prompt: human, multiple tails, modern, recent, old, oldest, graphic, cartoon, text, painting, crayon, graphite, abstract, glitch, deformed, mutated, ugly, disfigured, long body, lowres, bad anatomy, bad hands, missing fingers, extra digits, fewer digits, cropped, very displeasing, (worst quality, bad quality:1.2), bad anatomy, sketch, jpeg artifacts, signature, watermark, username, simple background, conjoined, bad ai-generated
Description
Apply Noob 1.0 Structure and Fix some detail problems
FAQ
Comments (94)
v-pred version when? :)
This probably is
@Crody I think you undo the vpred unless you specifically train with vpred. I just tried running with vpred settings and it's all grey
@antiman9 Vpred is available for Forge out of the box, no reason to keep using A1111
@metal079 came here to say the same thing. Got a grey image in reForge using the "Advanced Model Sampling" option for vpred, works fine for any other vpred model.
Not a big deal I guess, but certainly not vpred either.
@kuroe0 To reveal the merge process: I actually merged v-pred 1.0 just to swap/merge the text encoder and IN06-08 into e-pred 1.1
This results in the same color as e-pred and great posing like v-pred, without using a v-pred plugin (which is crucial for Diffusers users)
So excited to give 3.0 a try! This has been my fav model since 0.8 came out. Anything super new with this version, or just general improvements? Keep up the AMAZING work!
May I ask a question?
Could you allow me to upload Illustrious v0.8~v2.0 and pony v1.0~v6.0 on Tensor?
I don't monetize anything including support for the account.
I will be sure to link to the original and include the rules in the description.
The permission was already granted to me, so stop bothering me.
@Miguel45
Is this your account? https://tensor.art/u/728067922689492343
@miao1973 Yes, he gave me permission. In fact, he even mentioned that he has no problem with anyone sharing his models without prior permission, as long as they’re not monetized and the proper links are included. That’s why I find it absurd that you’re complaining just because someone else uploaded it first.
Also, what monetization are you talking about? I removed it several days ago because, honestly, I only added it as an experiment, and I didn’t even make any money from it. So stop whining like a child and causing trouble just because someone else uploaded the model before you. Crody has no issue with people sharing his models without asking, as long as his conditions are respected.
@Miguel45
https://civitai.com/models/503815?commentId=649212&dialog=commentThread
I'm talking about tensorART's Buffet plans.
I believe that any attempt to gain benefits from a model created by someone else is unacceptable, even if it is an experiment.
I understand that you removed it because you couldn't monetize it.
@miao1973 For your information, I never tried to monetize someone else’s models. The Buffet plan was something I created a while ago when I was experimenting with the idea of monetizing my two original models. However, it never generated any income. If you don’t believe me, I’d be happy to show you my profile, where you can see that I’ve earned a total of 0 since I had the Buffet plan active.
That said, I don’t understand why you’re bringing this up if I deleted the Buffet plan a long days ago. Could you please stop harassing me just because someone else uploaded a model before you? As I mentioned, today Crody gave me permission to upload this model, and a few days ago, he also gave me permission to upload his models. He even told me that he didn’t mind if other users uploaded his models, as long as his conditions were respected.
So, I don’t understand why you’re behaving this way, like a child, every time someone uploads a model before you. Your attitude seems childish and absurd, especially when Crody himself doesn’t care who uploads his models, as long as his terms are respected. Please mature and act like the adult I assume you are.
@Miguel45
By "a long days ago," do you mean before I pointed this out on 1/5 on Tensor's discord?
I don't think a week ago would be considered a long days ago.
And even though it was a different model, you can't deny that you set it up to monetize.
Do you think it's okay because you have zero income?
The setting is supposed to be like YouTube's Superchat, where if no one uses it, there's no benefit.
Your setting must have been wrong, or no one used the feature.
Well, it seems like you were helped by it this time.
You're acting like a child.
I'm not smart enough to figure out if this v3 description means it has noob merged, or is somehow just... the structure, and what that means? C': It's not mentioned that it would be merged, but then what is just the structure?
"Apply Noob 1.0 Structure and Fix some detail problems"
The base model is noob 1.0
"Applying" it means the model has as much prompt variety as noob 1.0 has
@Crody didn't you use noob 1.1 in "illustrious v1.0"? is there any advantage in going back to noob 1.0? really just asking out of curiosity.
@kuroe0 I used Epred 1.1 on Illustrious v1.0 and for this, I used Vpred 1.0
@Crody Oh wow, am I just really missing it this hard, or is neither noob nor v-pred really mentioned outside the small version description? I would maybe add "Noob 1.0 v-pred" to the version name instead of Illustrious... because that's not it anymore. :'D
But I mean this is good, and why I asked: I had no interest in using another IL model after using Noob models, and I almost wouldn't have downloaded this because it says IL. Now I'll of course download it.
Maybe you need to check this again, as this loads as an EPS model instead of a VPRED model, and it does not do the darker and more contrast-rich gens that base noobAI VPRED does ...
@Xeno443 Yeah, I've gotten around to testing it a bit now, and it's definitely an eps model. It'd be interesting to know what exactly @Crody meant. o:
I wish i could use it in Civitai, just an awesome Model
I know, right??? This makes anthros turn out so good on illustrious!
What's the reason why we can't use on civitai? :/
@Temari44 Because CivitAI doesn't have enough space to run this model
Update: Available!
Where do I even start? I downloaded it, but I don't understand how to use checkpoints; I'm new, and the scripter is too technical. I got the model planner plugged in, but I'm not sure what the "planned" text path is. I have the VAE link for Civitai and the Civitai API key. Hugging Face threw me for a loop; I got a repo key, but I can't code, so I stopped about 40 minutes in trying to figure it out. What do I do now?
Please redownload the model planner
The planned text is the file you write for the merge
If you just want to use the model, you can use the template below and select a text file containing these lines:
(Assuming you want to use Nova Furry IL V3)
+NF, https://civitai.com/models/503815?modelVersionId=1279960
PR NF NovaFurryILV3
After you press "Save plan as ipynb", go to Kaggle and import the saved ipynb
Run the first and second blocks
After that, run the one that starts with "Pipe Config"
Now you can do t2i using the block below (after editing it to what you want)
Is there an effective prompt to avoid triggering the 2.5d style and produce a stable 2d anime artstyle?
cell shading or remove photorealistic details
@Crody You mean cel_shading? I tried "cel shading, anime_coloring, uncensored, newest" in the positive, and "scan, old, oldest, (blurry, blurry_eyes, blur:1.14), white pupils, glowing eyes, watermark, artist name, bad anatomy, questionable anatomy, realistic, photorealistic details, photorealistic" in the negative. But it can't give a 2d artstyle. Could you give some example pictures?
@laoxs I've had the most luck using terms like 'minimalism' and 'flat colors' along with putting 'shiny skin' in the negatives. You can lower the strength of the terms if you want to soften the effect, and removing your quality terms (Masterpiece, best quality, etc) can help as those tend to push things toward 2.5 styles.
That said, the easiest and most consistent way is to put artist names with the kind of style you're looking for in the prompt (mizumizuni, yukie \(kusaka shi\), kakure eria), or to find a LoRA of the style you prefer and use that with this checkpoint.
I love the Illustrious 2.0 version of this checkpoint, but I think the Illustrious 3.0 version kinda lost its way. Everything feels less detailed, the lighting is worse, it just feels a little bit more lifeless.
After another ton of testing, the V3 model still can't recognize prompts describing characters' relative positions. Although it can generate other trait differences such as size and age, tags related to position like 'male_on_top, boy on top, female on bottom, smaller_on_top, larger on bottom, female on top, girl on top, male on bottom, larger on top, smaller on bottom' are still not working. I wonder if anything went wrong.
I never have issues with prompting for position
For size, try using 'size difference'
For the position of male/female, try using 'gender on gender'
@Crody May I have your example prompt for generating interactions? I failed to distinguish which character is lying on their back; about one third of the results have the positions reversed
@laoxs https://imgur.com/a/bMyzBWK
Prompt: masterpiece, best quality, amazing quality, very aesthetic, high resolution, ultra-detailed, absurdres, newest, 1boy, 1girl, female on male, furry, anthro, BREAK, red wolf girl, blue eyes, ahegao, rolling eyes, smile, open mouth, tongue out, drooling, long red hair, gigantic breasts, small waist, spread legs, nude, puffy nipples, pussy, BREAK, black fox boy, muscular, yellow eyes, gigantic penis, BREAK, vaginal, sex, female on top, straddling, duo, living room, from below, dutch angle, motion blur, floating hair, BREAK, eyes, detailed eyes, detailed fluffy fur, photorealistic details, volumetric lighting
The negative is the same as the presented one, CFG = 4.5, steps = 20
Using BREAK separates which character does what
Trying to get Illustrious 3.0 working, but it outputs a pitch-black image every generation. Other XL and Pony models work just fine with the exact same setup. Are there any prerequisites? I installed the VAE file and moved it into the vae folder as well, then mounted it in the settings, but still black.
The VAE is baked in
I figured it out since; apologies, my startup options were the issue. One of the following caused it: --opt-sdp-attention --opt-channelslast --autolaunch --disable-nan-check
I changed all of them and now it seems to be fine. Though --medvram keeps it from running out of VRAM a lot more.
@Erosito I use Diffusers+sd_embed on Kaggle
That way you can reduce VRAM use, since no GPU is needed for a UI
For the script, check out the suggested resources and this thread
Lovely model you've made thus far! Although I do have a question about the lighting it seems to default to.
Is there any way to generate images with low lighting (such as dark forests, a room with all the lights turned off, etc.)? It seems that no matter what image I generate, the character always has a light shining down onto them.
Add something like dim light, low contrast, or dark to the prompt
Check out the sample images for context
Please explain why the word "BREAK" is needed. What does it do? How do I use it correctly?
BREAK pads the prompt with up to 75 additional blank tokens
This makes each part of the prompt distinct
For example, in some generations "red hair girl, blue hair boy" would produce a red-haired boy and a red-haired girl (or the opposite), but by adding BREAK between them, you can create both without mixing
@Crody Thank you, it means my guesses were correct, I'll try it soon.
@MAXMORD BREAK can be essential in keeping "bleed" to a minimum. It isn't perfect but it can help. Using it properly can really improve gens a lot.
The prompt gets sent in 75-token "chunks". If you type a prompt of fewer than 75 tokens, your prompt takes up x amount and the rest is "padded". If you use 76, one chunk of 75 gets sent, and then the next chunk would have 1 token from your prompt, with the rest padded. It would still be 150 tokens sent in total, whether your prompt was 76 tokens or 143.

In some software, like webui and its derivatives, typing BREAK in the prompt will pad the rest of the chunk it is in (in ComfyUI it is "conditioning concatenate"), and what follows BREAK starts a new chunk. People use this to reduce concept bleeding. Concept bleeding is when "blue" is in one chunk describing the hair, but the image also gets blue clothing, a blue background, etc. Separating the blue hair into a different chunk from the clothing reduces this bleeding effect.

Be warned, however: the more chunks, the slower the generation, and the less impact any given chunk has on the whole image.
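The chunking behavior described above can be sketched in pure Python. This is only a rough illustration under loud assumptions: whitespace-separated words stand in for real CLIP tokens, and the `chunk_prompt` helper is hypothetical, not webui's actual tokenizer code (which works on 77-token windows including BOS/EOS).

```python
# Rough sketch of webui-style prompt chunking with BREAK.
# Words stand in for real CLIP tokens; a real implementation
# tokenizes with the CLIP tokenizer first.

CHUNK = 75  # usable "tokens" per chunk

def chunk_prompt(prompt: str):
    """Split a prompt into padded 75-'token' chunks, honoring BREAK."""
    chunks = []
    for segment in prompt.split("BREAK"):
        tokens = segment.split()
        # a segment longer than 75 tokens spills into extra chunks
        for i in range(0, max(len(tokens), 1), CHUNK):
            part = tokens[i:i + CHUNK]
            # BREAK (segment boundary) pads the rest of the chunk
            chunks.append(part + [""] * (CHUNK - len(part)))
    return chunks

chunks = chunk_prompt("red hair girl BREAK blue hair boy")
print(len(chunks))  # 2: BREAK forced a second chunk
print(chunks[0][:3], chunks[1][:3])
```

Note how "red hair girl" and "blue hair boy" end up in separate chunks, which is exactly the bleed-reduction effect discussed above; without BREAK they would share one chunk.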
Does this model understand natural language?
Worse than with tag-based prompting, but it'll work
Combining both of them creates great images
@Crody Hi, this question and answer has me intrigued. Is natural language/prompt language referring to the way the model interprets the prompt? If yes, where could I find more about how to combine/improve them, as you suggest?
@kevothokh Natural language means you write basically as you would in conversation, like "the girl sitting under a tree looking at the sky"
Combining it with prompt language means adding danbooru tags before/after the sentence, like "1girl, solo, outdoor, the girl sitting under a tree looking at sky, blue sky, tree, looking at another"
@Crody Ah, I see. Thanks for the explanation
I really love this model and I use V4.0 at the moment. Currently I'm trying to make multiple-character images. I saw some support modules like OpenPose, but they don't seem to work with this model. Maybe I'm wrong to ask here, but I don't want to move away from this model, so do you have some guidance?
I can't say much about Pony models, but they are SDXL, so using BREAK would separate each character's likeness, and thus you can create better interactions
I suggest using the newest version (Illustrious v3.0); that way you can create multiple-character images without using OpenPose
Check out other images with prompts
I still have hope for a PONY 7.0 version
When using img2img I keep getting a blank image with an error:
[Bug]: NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Is there a fix for this? I've tried every version
I encountered this kind of problem around the SD1.5 era, when we could still use A1111 on free Colab
I fixed this issue by placing LoRAs and any other kinds of models into separate folders. Check out this page for possible fixes: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6923
If possible, I'd love to see Illustrious v3.0 added to the site generator!
Now they are!
Bloody amazing checkpoint.
What's the point of posting this if people can't even use it? Is it just an obnoxious flex or what?
If you're struggling with running SDXL models, you can use this on PixAI or locally instead
For me, using it on Kaggle with diffusers+sd_embed gives the best performance
If you're saying that Illustrious models are unusable, just prompt them the same way as SD1.5 models
That way you can enjoy models, including mine, to their full potential
Where's this?
@kash35 Currently not supported because of the lack of space in CivitAI's database to run it on the on-site generator
@Crody Seriously??? Figured they'd be prepped for this sort of thing, but I guess not. lmao
Update: Now there is!
Does anyone have novaFurryXL_illustriousV30.VAE? Only found it on one site, but can't download it from there.
The VAE is already baked in
In case you're wondering, the VAE is the SDXL VAE
@Crody Got it. Then I don't understand why my images with this model aren't as sharp as others'. They're kind of blurry. (I use Forge and Comfy)
@Twostep Did you use the recommended prompt and negative prompt for the Illustrious version?
The key to sharp images is using the quality tags and "volumetric lighting"
Also, adding angle/shot tags would help you make better images
Check out the prompts on the sample images or community ones
@Crody Turns out it was my fault. If the forge has Euler and Euler a, then in Comfy I did not notice that I used Euler. 😑
THIS IS BEST BASE MODEL EVER!!!!
NICE FINGER, NICE TEXTURE, NICE FACE
If you use this checkpoint, you don't need a LoRA anymore to get a satisfactory image.
LoRAs are still required for any characters, styles, or concepts the model is not trained on.
Good stuff. If it had a few preview images with non-furry characters, more people would know that it supports non-furry content as well.
Noob question, how do you use this checkpoint after downloading? I'm rather new to this stuff.
It depends on what kind of program you use
You can use Forge, A1111, or Comfy locally, but they need a GPU with more than 15 GB of VRAM
If you don't have one, you can use it on Kaggle with diffusers, which I documented in the article in the suggested resources
Excuse my ignorance...
Can I use Pony lora on this checkpoint?
For Pony versions: Yes
For Illustrious versions: Probably No
I have done some testing with Illust v3, and sometimes a Pony LoRA will work kinda OK, but it will always be better to use an Illustrious-trained LoRA.
You can absolutely use Pony (and SDXL) lora with Illustrious models, but:
*They are often very muted in effect. You'll probably need a higher weight on them to get the same effect as with the type of model they're trained on. For style LoRAs, my usual rule of thumb is that a Pony LoRA used on Illustrious (or vice versa, this all applies in the other direction too!) needs twice as strong a weight to get an equivalent result. Which holds true until it doesn't; some seem to work fine.
*Some types work better than others
*Some will not have quite the same result
*Some will be more corrupted - I notice more trouble with details getting garbled sometimes.
*Some will just flat-out not work, or at least not to any useful extent
So... Basically, "Maybe, maybe not, YMMV, try it and see what happens, management is not responsible for any abominations beyond human comprehension that get produced, may the gods have mercy on your soul."
I'm new to SD, so I want to ask: in the "about this version" note you say "Apply Noob 1.0 Structure and Fix some detail problems". What's the Noob 1.0 structure, and how do I apply it?
Noob 1.0 is supposed to be v-pred noobXL 1.0, but this runs as e-pred
This means it has the posing of v-pred but can run with e-pred settings, which is much better in my opinion, considering the color/contrast and the settings v-pred requires
You can apply this using a block merge
No BASE, just IN06-08
@Crody I'm sorry, I wasn't very clear. What I meant is I don't know a lot about programming, or software-related stuff in general, so I have follow-up questions: what is IN06-08, and what multiplier and other settings should I use in the merger?
@ErickoEscobar6309 Are you using the Model Merge Scripter?
Then you can learn it by following the article in the suggested resources
The alphas for a block merge are written in the following order:
"BASE, IN00, IN01 ... IN08, MID00, OUT00, OUT01 ... OUT08"
So if you want to apply B's posing to A, you can use the following:
CM A + B "0,0,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0" Result
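To make the 20-value alpha list concrete, here is a minimal pure-Python sketch of a block merge. Everything here is illustrative: plain floats stand in for torch tensors, the `block_of` mapping is a guess based on the usual SDXL UNet key naming (input_blocks.0-8, middle_block, output_blocks.0-8), and the actual scripter discussed in this thread may work differently.

```python
# Sketch of a 20-value SDXL block merge in the order
# "BASE, IN00..IN08, MID00, OUT00..OUT08".
# result = (1 - alpha) * A + alpha * B, per tensor (floats here).

BLOCKS = (["BASE"] + [f"IN{i:02d}" for i in range(9)]
          + ["MID00"] + [f"OUT{i:02d}" for i in range(9)])

def block_of(key: str) -> str:
    """Map a UNet state-dict key to one of the 20 merge blocks."""
    for prefix, name in [("input_blocks.", "IN"),
                         ("middle_block.", "MID"),
                         ("output_blocks.", "OUT")]:
        if prefix in key:
            if name == "MID":
                return "MID00"
            idx = int(key.split(prefix)[1].split(".")[0])
            return f"{name}{idx:02d}"
    return "BASE"  # time_embed, out, etc.

def block_merge(a: dict, b: dict, alphas: list) -> dict:
    alpha_for = dict(zip(BLOCKS, alphas))
    return {k: (1 - alpha_for[block_of(k)]) * a[k]
               + alpha_for[block_of(k)] * b[k]
            for k in a}

# Swap B's IN06-IN08 into A (alpha 1 = take B's weight entirely),
# matching the "0,0,0,0,0,0,0,1,1,1,0,..." string above:
alphas = [0] * 7 + [1, 1, 1] + [0] * 10
merged = block_merge(
    {"input_blocks.7.0.w": 0.2, "output_blocks.3.0.w": 0.5},
    {"input_blocks.7.0.w": 0.8, "output_blocks.3.0.w": 0.9},
    alphas)
print(merged)  # IN07 comes from B, OUT03 stays from A
```

With alpha 1 on IN06-08 only, this reproduces the "swap B's IN06-08 into A" merge described above; intermediate alphas (e.g. 0.3) would blend the two checkpoints within that block instead.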
@Crody Just by looking at this merge plan example I'm getting suicidal thoughts. There are three things I don't understand. First is this line: "CM DHP + MKF +S SXL 0.2 0.3 _A". What is DHP? There was nothing named DHP, so where is it coming from? The second is "CM E + B "0,1,1,1,1,0.3,0.2,0,0,0,0,0,0,0,0,0,0,0,0,0,0.4,1,1,1,1,1" NovaFurryV3". What's with this alpha? Are these values? If yes, what does this number sequence represent? And the third thing is: what would an example using IN06-08 look like, something like "CM ABC + XYZ IN06 _A"? I'm really sorry for asking you this much, I really know nothing about programming.
@ErickoEscobar6309
DHP was a checkpoint called Dead Horse Pony (which I forgot to add in the script with "+DHP, model link"); the model has already been removed by its owner
The last example using CM in the previous comment is the script that swaps B's IN06 through IN08 into A
@Crody OK, let's say I understand, but do I need to swap anything if I just want to merge NoobAI with NovaFurry? Wouldn't this work:
"+NF3, https://civitai.com/models/503815?modelVersionId=1279960
+NAI, https://civitai.com/models/833294?modelVersionId=1190596
CM NF3 + NAI IN07 NovaFurryxNoob"
or am i fucking retarded, and that's not how alpha works?
@ErickoEscobar6309 The problem is that I implemented a list, not a dict, for block-merge recognition, because that is easier when you use something like argparse
So right now, you have to write all of the values in order to get the right merge
It is hard to get used to, but it'll be much easier once you understand how it works
By the way, the scripter creates an ipynb notebook that works on Kaggle by importing it
@Crody Okay, I've made big advancements: I completed and now (almost) fully understand the script, completed the merge plan, ran main.py, and got a txt file. What do I do with it now? (I use Forge, not Kaggle.)
@ErickoEscobar6309 Forge isn't going to work with it
You need a Kaggle account (a website that provides online GPUs) in order to use it
This way you don't have to worry about having enough space
Also, press "Save as ipynb" instead
After you make the Kaggle account, verify it with your phone, and then you can use a T4x2 GPU for up to 30 hrs/week
@Crody Hmm, I see. Welp, I'm going to stay on Forge for now; maybe I'll try Kaggle later. Anyway, thank you very much for the help.
Available On (1 platform)
Same model published on other platforms. May have additional downloads or version variants.