This is a merge of my previous model, Disney Pixar Cartoon Type A, with my own LoRA trained on Midjourney images. Compared to the previous version, males and old people look more like actual Pixar characters, and females look less anime.
I use Hires. fix with the Latent upscaler, denoising 0.5, hires steps 20, upscale by 2, and Clip skip 2.
You need to use a VAE, or the colors become pale and gray.
If you like the models, please consider supporting me; I will continue to upload more cool stuff in the future:
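If you drive the AUTOMATIC1111 web UI through its local API (started with the --api flag), the settings above map onto a txt2img request payload. A minimal sketch, assuming a local web UI; the prompt text is a placeholder, not the author's:

```python
# Sketch: the recommended settings as an AUTOMATIC1111 txt2img API payload.
# Assumes the web UI is running locally with --api; the prompt is a placeholder.
payload = {
    "prompt": "pixar style portrait of an old man",  # placeholder prompt
    "enable_hr": True,                  # Hires. fix on
    "hr_upscaler": "Latent",            # upscale latent
    "denoising_strength": 0.5,
    "hr_second_pass_steps": 20,         # hires steps
    "hr_scale": 2,                      # upscale by 2
    "override_settings": {"CLIP_stop_at_last_layers": 2},  # Clip skip 2
}

# To actually generate, POST the payload to the running web UI, e.g.:
# import requests
# r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
```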
patreon.com/PromptSharingSamaritan
https://ko-fi.com/promptsharingsamaritan
Comments (60)
You're a legend!! I have been hoping for more models like this and the original DisneyPixar model was already fantastic. Going to enjoy using this, thank you for the hard work my friend!
One thing that jumps out immediately - eyes are better vs type A model, good job
I feel like Loras work better on type A compared to type B
Definitely better at generating male characters. Though the first thing I noticed is how much better the eyes turn out. Keep up the great work.
Would you please consider making LORA versions of both A and B? I think people will like them a lot!
Thank you so much!
what is the difference between B and A? I was curious to know which I should use
Does this model support diffusers for training faces? If not, would you consider uploading it to Hugging Face?
This is pretty much a review but I don't feel like uploading any images right now. I'll get to it.
The styling on this model is epic. It's very good at staying consistent when odd topics are thrown at it. The creativity of the model is lacking though. With most models I can provide the same prompt over and over and get a good variety of poses, angles, and scenery. With DPC-type-B the outputs are very consistent if the prompt stays consistent. Furthermore subjects tend to be facing forward, centered, and smiling. It can take a good deal of prompting to break some of the biases.
I hope the author keeps up work on this model and tries to give it a little more flexibility and creativity.
yes, i will update this when i have time. This is just a test with a LoRA trained on only 20 images; i usually train on more than 200.
I've tried a couple of the pixar style models out there, and for TXT2IMG without Loras, this is just great, right out of the box. The others are great too and I appreciate everyone's contributions, but this one was just wonderful and I wanted to say that!
curious why you didn't do this as a new version on your first Pixar one - is this one better than A?
also just learned they have the same file name
different art style
This model looks really cool, what do you mean by a merge of your previous model and a lora? I'm very curious about enforcing a lora's style in an actual checkpoint, correct me if this is not what you did here...
They are 2 different styles, but they both have the same file name.
yea sorry , that was a mistake
It's still the same file with type A if u click download, pls fix this.
just change the file name
@PromptSharingSamaritan Sorry I don't get it, I download the same file from type A and B, how can I change between them? Where's the file name that you mentioned?
[@jiachenulu979]
Note 1: This is a translation from Spanish (Spain) to English, so it may contain small errors (but not in the commands).
Note 2: I have written this as if you knew almost nothing.
I have also assumed that you use Windows, so if you use Mac, Ubuntu, etc., you will have to look up yourself how to obtain the SHA256 hash.
To know whether it is the same file, look at the SHA256.
If it is a different file, the SHA256 will be different.
The following command prints the SHA256 hash of any file:
cls && certutil -hashfile "ADDRESS\FILE" SHA256
Replace ADDRESS with the folder where your file is located.
Replace FILE with the file name (it must include the file extension, typically file.safetensors or file.ckpt).
You can omit ADDRESS if you are already in the file's directory.
"cls" clears the CMD window, and && then runs the certutil command.
▼ Example.
cls && certutil -hashfile "C:\Programs\stable-diffusion\stable-diffusion-webui-1.5.2\models\Stable-diffusion\dollLikeAnime_v10.safetensors" SHA256
If you don't know the file extension:
The normal thing would be to configure Windows to show extensions.
But you can skip that setup for now with a couple of extra steps.
▼
Type (or copy and paste into CMD) the path and the beginning of the file name.
───
cls && certutil -hashfile "C:\Programs\stable-diffusion\stable-diffusion-webui-1.5.2\models\Stable-diffusion\dollLikeAnime_v
───
It does not matter that the name is cut off; what matters is that the directory where the file is located is correct.
Now simply press the [TAB] key and the name will be completed automatically.
If it is not the name of your file, keep pressing [TAB] and it will cycle through the files that match what you typed.
Finally, remember to append SHA256 so that the command outputs the SHA256 hash.
Now that you have the SHA256 hash, you can compare it.
If two hashes are the same, the files are the same (regardless of whether they have different names).
You can also compare the hash with the one provided by Civitai. To see it:
On the model page → in the download area on the right, expand the section called "1 File".
There you will see the file that you can download.
Just below the [Download] button there is an (i) icon.
Click it and you will find the SHA256 in the "Hashes" section.
Note: there are several hashes listed:
CRC32, BLAKE3, AUTOV1, AUTOV2, ►SHA256◄
(You can view the next one by clicking the arrow [>].)
You will not see the entire hash, because it is long.
Simply click on it and it will be copied to your clipboard (as if you had pressed [Ctrl] + [C]).
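On Mac or Linux (or as a cross-platform alternative to certutil), the same SHA256 check can be done with a few lines of Python using the standard-library hashlib module. A minimal sketch; the file name at the bottom is just an example:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA256 hex digest of a file, read in chunks so
    multi-gigabyte .safetensors files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Identical digests mean identical files, regardless of file names:
# print(sha256_of("disneyPixarCartoon_v10.safetensors"))
```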
When I used this model earlier it produced pictures with perfect color, but now the colors come out dull. What could be the reason?
it's a VAE issue, try disabling it and then enabling it again
Hello! How can I use a VAE with https://stablediffusionapi.com/models/counterfeit-v25 ?
What should I pass in the parameter? It says null:
"lora_model": null,
"tomesd": "yes",
"clip_skip": "2",
"use_karras_sigmas": "yes",
"vae": null,
"lora_strength": null,
"scheduler": "DDPMScheduler",
"webhook": null,
Could someone please tell me if there is a specific VAE that I need to use with this model, or is any VAE fine as long as I enable it while creating images? Really hope I can get some answers! Thank you!
any anime vae is fine, i use yozora vae but it was deleted from this site for some reason
@PromptSharingSamaritan That really helps! Thank you very much, my man!
@PromptSharingSamaritan Any idea if this VAE is available anywhere else online? The eyes on my images are all weird. Everything else works great. But I can't get the eyes right for the life of me.
@iridepowder301 any other anime vae works just fine, you need to use hi res fix
@iridepowder301 yozora vae is still available through huggingface, you can just google it, hope it will help!
@Deluxebomb Appreciate it man, I will check it out and see if I can find it
@PromptSharingSamaritan I've downloaded the Yozora VAE and used Hires. fix, but I'm still getting these eyes that look like there is an additional transparent lens over top of them, almost like it's double rendered. But it's JUST on the eye. I've tried using other VAEs (Grapefruit VAE), but get the same result. I've turned 'restore faces' on and off. Same result.
I should mention as well that in the "render preview", it looks like the eye is rendering properly, but once it's done then the eye looks all weird.
How high should I be setting my high res fix to? What should my output image be? 1024x1024?
@iridepowder301 from my personal experience, 1024 X 1024 or even higher resolutions always give better outputs, and maybe try to add some negative eye prompts in your negative prompt section. If you haven't done so, copy and paste the prompts from the sample images and see how the outputs are, if they are fine then learn the prompts from the sample images, and improve them with your own words, this is how I test a model for the first time I download it. I'm sorry I didn't have your troubles, but maybe this will help ;)
@Deluxebomb thanks man that does help!
For people reading this in the future: Don't waste your time with Yozora VAE, it's just a renamed kl-f8-anime2.ckpt (their hashes are the same.)
Pruned version?
Noob here. I get this error when trying to use either A or B as a LoRA. I can use others with no issue, please help.
"Failed to match keys when loading network" Followed by a ton of text
That's because it isn't a LoRA. You have to put this in your models/Stable-diffusion folder and then set it as your checkpoint.
@johnnywickman123236 ty very much!
Is there a good guide for using this? I have Stable Diffusion 1.4 locally but I'm getting very bad results.
This model is just a diamond among the others reflecting the style of Disney 3D cartoons! In my opinion, it is the best of all that I have tried.
Excellent responsiveness and flexibility, and an excellent, recognizable cartoon style. It even worked perfectly with my test dragoncat and beaver (something few models handle even approximately well)...
An excellent model! A million thanks!
can I use this for my nsfw games / comics?
sure
This has the same name as the other file? But isn't a replacement for it?
How can i use this model to do img2img?
i think so
@PromptSharingSamaritan how can i do it? any resources available? what tools should i use?
it's SD 1.5; use ComfyUI, there are tons of workflows. It takes some time to install but it's awesome. Otherwise use AUTOMATIC1111, whatever it's called. I prefer ComfyUI: more control and so many ControlNets.
@richie231186 I load the model with Load Checkpoint and do nothing special, just text prompts, and the output is not so good. Tried many times, added in some other LoRAs, and then it started to get better. If you are getting good results, do you mind sharing a simple sample workflow? I seem to be doing something wrong, and along the way I sometimes also get errors, but not in a consistent way.
@ozanyurdakul886 check the photos. They usually list the LoRA strength and prompts used. The LoRA keyword and strength usually help a lot. If it's not fine-tuned you need to use the keyword.
I am constantly getting "KSampler: 'NoneType' object has no attribute 'shape'" in ComfyUI. What could I be doing wrong? I am loading the model with the checkpoint loader. Complete newbie, so sorry if the question is dumb; or does it have something to do with the model version?
I'm using Fooocus and I can't even get it to generate an image!
what is the triggerword?
You mention we need to use a VAE... but where is that file for this checkpoint??
Need a bit of help... pictures indeed come out pale/grey.
google yozora vae huggingface
@PromptSharingSamaritan thank you!
These are the configs that I've used to get good image-to-image output on a 4070:
Prompt:
Transform this source image into a Disney-inspired illustration, keeping the characters’ original features, clothing, and overall look. The characters should be reimagined in the classic Disney animation style, while preserving their natural proportions and expressions.
Female characters should maintain their natural femininity with soft, delicate facial features, larger expressive eyes, and graceful, elegant poses. They should not appear overly muscular or masculine—ensure their figures and expressions retain a natural, feminine quality typical of Disney princesses or heroines.
Male characters should also have strong, masculine features, but not in an exaggerated or hyper-masculine way. They should have defined jawlines, broad shoulders, and heroic postures, but avoid making them look too bulky or overpowering. Their facial features should remain balanced with natural masculinity, and they should exude strength in a way that feels realistic, not exaggerated.
Ensure both female and male characters look true to their original appearance, with proper eye alignment—no cross-eyed or unnatural gazes. Male characters with facial hair (mustache and beard) should have these elements preserved, keeping them distinct and true to the original style.
The color palette should remain vibrant, bold, and clean, with bright saturated tones typical of Disney animation, while backgrounds should convey an enchanting and magical atmosphere.
Negative Prompt:
Do not make female characters appear masculine or overly muscular—ensure they retain a soft, feminine appearance with natural features and proportions. Avoid making male characters too overly masculine, bulky, or exaggerated. Their features should reflect realistic strength, not an unrealistic level of muscle or size. Avoid making both male and female characters appear unnatural in terms of their body proportions or facial features. Do not misalign the eyes or make them cross-eyed—ensure that the eyes are proportionate, well-aligned, and naturally expressive. Do not remove or alter facial hair on male characters; the mustache and beard should be preserved and clearly visible, fitting the Disney style. Avoid dull or muted colors—ensure a vibrant, clean, and lively Disney color palette. Avoid overly simplifying the characters to the point where their original essence is lost, especially in terms of natural features and expressions.
Seed: 3013823782
CFG Scale: 12
Denoising Strength: 0.45
Sampling Method: DPM++ 2M
Sampling Steps: 80
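The numeric settings above can be collected into a single img2img request for the AUTOMATIC1111 local API. A minimal sketch, assuming a web UI started with --api; the init image is passed as a base64 string, and "init.png" plus the helper name are illustrative, not from the original post:

```python
import base64

# Sketch: the settings listed above as an AUTOMATIC1111 img2img API payload.
# Assumes a local web UI running with --api; "init.png" is a placeholder for
# whatever source image you are transforming.
def build_img2img_payload(init_image: bytes, prompt: str, negative: str) -> dict:
    return {
        "init_images": [base64.b64encode(init_image).decode("ascii")],
        "prompt": prompt,
        "negative_prompt": negative,
        "seed": 3013823782,
        "cfg_scale": 12,
        "denoising_strength": 0.45,
        "sampler_name": "DPM++ 2M",
        "steps": 80,
    }

# Usage sketch:
# payload = build_img2img_payload(open("init.png", "rb").read(), prompt, negative)
# requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
```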
Details
Files
disneyPixarCartoon_v10.safetensors
Mirrors
disneyPixarCartoon_v10.safetensors
583_disneyPixarCartoon_v10.safetensors
dpc_v10.safetensors
disneyPixarCartoon_v10 (1).safetensors
disneyPixarCartoon_v10_2.safetensors
DisneyPixarCartoontypeB.safetensors
disneyPixarCartoonTypeB_v10.safetensors
DsnPixrrCartoon_v10.safetensors
disney-pixar.safetensors
stylized_disneyPixar_vb.safetensors