!!! UPLOADING/SHARING MY MODELS OUTSIDE CIVITAI IS STRICTLY PROHIBITED* !!!
Check my EXCLUSIVE models on Mage.Space: AniMage PXL • AniReal PXL • Lucid Dream • AniMage SD1.5 • Realistic Portrait
SDXL - Pony: AniVerse PXL • AniMerge PXL • AniToon PXL • AniMics PXL • AniVerse XL
SD1.5: AniVerse • AniThing • AniMerge • AniMesh • AniToon • AniMics
Also in Collaboration with Shakker.ai
This model is free for personal use and free for personal merging(*).
For commercial use, please contact me via Ko-fi or by email: samuele[dot]bonzio[at]gmail[dot]com
⬇ Read the info below to get high-quality images (click on Show more) ⬇
Animerge is an experiment
This is an experimental project I run while training the new Aniverse model.
While I wait, I like to experiment with merges of Aniverse or Animesh with other models.
-> If you are satisfied with my model, press ❤️ to follow its progress, and consider leaving a ⭐⭐⭐⭐⭐ model review; it's really important to me!
Thank you in advance 🙇
And remember to publish your creations using this model! I’d really love to see what your imagination can do!
Recommended Settings:
An excessive negative prompt can make your creations worse, so follow my suggestions below!
Before applying a LoRA to produce your favorite character, try the prompt without it first. You might be surprised by what this model can do!
My A1111 settings:
On my home PC I run A1111 with this setting:
set COMMANDLINE_ARGS= --xformers
If you can't install xFormers (read below), use my Google Colab setting:
set COMMANDLINE_ARGS= --disable-model-loading-ram-optimization --opt-sdp-no-mem-attention
My A1111 Version: v1.6.0-RC-28-ga0af2852 • python: 3.10.6 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2
If you want to activate the xFormers optimization like on my home PC (how to install xFormers):
In A1111, click on the "Settings" tab
In the left column, click on "Optimization"
Under "Cross attention optimization", select "xformers"
Press "Apply settings"
Restart your Stable Diffusion
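If you want to check whether xFormers is actually available to A1111, here is a minimal sketch (run it with the Python inside A1111's venv; the venv path below is the default Windows one and may differ on your install):
# run with A1111's venv interpreter, e.g. venv\Scripts\python.exe
import torch

try:
    import xformers
    print("xformers", xformers.__version__, "| torch", torch.__version__)
except ImportError:
    print("xformers is not installed in this environment")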
If you can't install xFormers, use SDP attention, like on my Google Colab:
In A1111, click on the "Settings" tab
In the left column, click on "Optimization"
Under "Cross attention optimization", select "sdp-no-mem - scaled dot product without memory efficient attention"
Press "Apply settings"
Restart your Stable Diffusion
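For the curious: the sdp options rely on PyTorch 2.x's built-in scaled dot product attention. A tiny illustrative sketch of the underlying call (A1111 wires this up for you; you never call it yourself):
import torch
import torch.nn.functional as F

# toy tensors shaped (batch, heads, tokens, head_dim)
q = torch.randn(1, 8, 77, 64)
k = torch.randn(1, 8, 77, 64)
v = torch.randn(1, 8, 77, 64)

# available in torch >= 2.0; this is what the "sdp" optimizations call internally
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([1, 8, 77, 64])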
To emulate the NVIDIA GPU, follow these steps:
In A1111, click on the "Settings" tab
In the left column, click on "Show all pages"
Search for "Random number generator source"
Select the "NV" option
Press "Apply settings"
Restart your Stable Diffusion
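Why this setting matters: with the same seed, PyTorch's CPU and CUDA random generators produce different noise, so a seed from an NVIDIA machine won't reproduce on a CPU or a Mac unless A1111 emulates the NVIDIA source. A small sketch of the effect (the latent shape is the SD1.5 one for a 768x1024 image):
import torch

seed = 42
shape = (1, 4, 128, 96)  # (batch, channels, height/8, width/8) for 768x1024

cpu_noise = torch.randn(shape, generator=torch.Generator("cpu").manual_seed(seed))
# On a CUDA machine the same seed yields a different tensor:
# gpu_noise = torch.randn(shape, device="cuda",
#                         generator=torch.Generator("cuda").manual_seed(seed))
# The "NV" option makes A1111 draw the initial noise the NVIDIA way,
# so the same seed gives the same image across machines.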
If you use my models, install the ADetailer extension for your A1111.
Navigate to the "Extensions" tab within Stable Diffusion.
Go to the "Install from URL" subsection.
Paste the following URL: https://github.com/Bing-su/adetailer
Click on the "Install" button to install the extension.
Restart your Stable Diffusion
How to install Euler Smea Dyn and Euler Max Sampler:
In A1111, click on the "Extensions" tab
Click on "Install from URL"
Under "URL for extension's git repository", paste this link: https://github.com/licyk/advanced_euler_sampler_extension
Once installed, click on the "Installed" tab
Click "Apply and quit"
Restart your Stable Diffusion
The new samplers now appear at the end of the sampler list.
How to use ADetailer with Euler Smea Dyn and Euler Max Sampler:
In A1111, click on the "txt2img" tab
Expand ADetailer and check "Enable ADetailer"
Scroll down and expand the "inpaint" section
Turn on "Use separate sampler"
Now select "DPM++ 2M Karras" (or your favourite sampler)
VAE: a VAE is included (but I usually still use the 840000-ema-pruned one)
Clip skip: 2
Upscaler: 4x-Ultrasharp or 4X NMKD Superscale
Sampling method: DPM++ 2M SDE Karras
Sampling steps: 50+ (it is very important not to go below this minimum)
Width: 576 (or 768)
Height: 1024
CFG Scale: 6~7
MY FAVORITE PROMPT:
(masterpiece, best quality, highres:1.2), (photorealistic:1.2), (intricate and beautiful:1.2), (detailed light:1.2), (soft lighting, side lighting, reflected light), (colorful, dynamic angle), upper body shot, fashion photography, YOUR PROMPT, dynamic pose, light passing through hair, (abstract background:1.3), (official art), (perfect skin), (sharp)
NEGATIVE PROMPT:
(worst quality:1.8, low quality:1.8), moles, mole, tears, piercing, freckles, skindentation, cutoffs, shiny skin, lucid skin, pendant, scars on face, interlocked fingers,
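If you prefer to drive A1111 from a script, here is a minimal sketch of these settings as a call to the standard web API (launch A1111 with the --api flag first; the prompts below are abbreviated, so paste in the full ones from above):
import base64
import requests

payload = {
    "prompt": "(masterpiece, best quality, highres:1.2), (photorealistic:1.2), YOUR PROMPT",
    "negative_prompt": "(worst quality:1.8, low quality:1.8), moles, mole, tears, piercing",
    "sampler_name": "DPM++ 2M SDE Karras",
    "steps": 50,
    "width": 576,
    "height": 1024,
    "cfg_scale": 7,
    "override_settings": {"CLIP_stop_at_last_layers": 2},  # Clip skip: 2
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
with open("output.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))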
YOU CAN ALSO USE THESE NEGATIVE EMBEDDINGS:
1) EasyNegative, negative_hand-neg, moles, mole, piercing, tears, face skin imperfection, freckles, skindentation, cutoffs, shiny skin, scars on face, bad-hands-5
2) Bad-Images-39000, negative_hand-neg, moles, mole, piercing, tears, face skin imperfection, freckles, skindentation, cutoffs, shiny skin, scars on face, bad-hands-5
3) ng_deepnegative_v1_75t, (worst quality:1.4), (low quality:1.4), (normal quality:1.4), lowres, bad anatomy, bad hands, normal quality, ((monochrome)), ((grayscale)), ((watermark)), negative_hand-neg, moles, mole, piercing, tears, face skin imperfection, freckles, skindentation, cutoffs, shiny skin, scars on face, bad-hands-5
4) FastNegativeV2, negative_hand-neg, moles, mole, piercing, tears, face skin imperfection, freckles, skindentation, cutoffs, shiny skin, scars on face, bad-hands-5
5) For MEN images: girl, woman, female, tits, BadImage_v2-39000, negative_hand-neg, moles, mole, piercing, tears, face skin imperfection, freckles, skindentation, cutoffs, shiny skin, bad-hands-5, scars on face
HiRes.Fix Setting:
I don't use HiRes Fix because:
1) it doesn't work on my computer
2) my models don't need it. Use txt2img, ADetailer, and the suggested upscaler in the resources tab.
If you still want to use it, these are the settings sent to me by MarkWar (follow him to see his creations ❤️).
Hires upscale: 1.5
Hires steps: 20~30
Hires upscaler: R-ESRGAN 4x+ Anime6B
Denoising strength: 0.4
Adetailer: face_yolov8n
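If you script these hires settings, they map onto the same txt2img API payload from the sketch above (field names are the standard A1111 ones; the upscaler string must match your UI exactly):
payload.update({
    "enable_hr": True,
    "hr_scale": 1.5,
    "hr_second_pass_steps": 25,            # "Hires steps: 20~30"
    "hr_upscaler": "R-ESRGAN 4x+ Anime6B",
    "denoising_strength": 0.4,
})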
How to install and use ADetailer: see the installation steps above.
Inpainting Setting:
When you see that I used inpainting on my images, know that I only modified the face (HiRes Fix doesn't work on my old PC and gets stuck). These are my settings:
Click on the img2img tab, then click on Inpaint.
Paint the face (only the face, neck, ears...) and after that set:
Inpaint masked
Only masked
Only masked padding, pixels: 12
Sampling steps: 50
Batch size: 8
In the positive prompt write: (ultra realistic, best quality, masterpiece, perfect face). Then click on GENERATE
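The same inpainting settings, expressed as a sketch against the standard img2img API (again, launch with --api; the file names are placeholders, and the mask must be white where you painted the face):
import base64
import requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("image.png")],   # the image whose face you want to fix
    "mask": b64("face_mask.png"),        # white = the painted area
    "prompt": "(ultra realistic, best quality, masterpiece, perfect face)",
    "inpainting_mask_invert": 0,         # "Inpaint masked"
    "inpaint_full_res": True,            # "Only masked"
    "inpaint_full_res_padding": 12,      # "Only masked padding, pixels: 12"
    "steps": 50,
    "batch_size": 8,
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()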
ControlNet & Prompt guide video tutorial:
Thanks to tejasbale01 - Spidey Ai Art Tutorial (follow him on YouTube)
Animesh Full V1.5 + Controlnet | Prompt Guide |
Do you like my work?
If you want, you can help me buy a new PC for Stable Diffusion!
❤️ You can buy me a coffee (an espresso... I'm Italian) or a beer ❤️
This is the list of hardware if you are curious: Amazon Wishlist
I must thank Olivio Sarikas and SECourses for their video tutorials! (I'd really love to see a video of yours using my model ❤️)
You are solely responsible for any legal liability resulting from unethical use of this model
(*) MarkWar is authorized by me to do anything with my models.
(**) Why did I set such stringent rules? Because I'm tired of seeing sites like Pixai (and many others) that get rich on the backs of the model creators without giving anything in return.
(***) Low Rank Adaptation models (LoRAs) and Checkpoints created by me.
As per Creative ML OpenRAIL-M license section III, derivative content (i.e. LoRAs, checkpoints, mixes, and other derivative content) is free to modify the license for further distribution. In that case, the license is provided on each single model on Civitai.com. All models produced by me prohibit hosting, reposting, re-uploading, or any other use of my models on other sites that provide a generation service without my explicit authorization.
(****) According to Italian law (I'm Italian):
The law on copyright (law 22 April 1941, n. 633, and subsequent amendments, most recently that provided for by the legislative decree of 16 October 2017 n.148) provides for the protection of "intellectual works of a creative nature", which belong to literature, music, figurative arts, architecture, theater and cinema, whatever their mode or form of expression.
Subsequent changes, linked to the evolution of new information technologies, have extended the scope of protection to photographic works, computer programs, databases and industrial design creations.
Copyright is acquired automatically when a work is defined as an intellectual creation.
Also valid for the US: https://ufficiobrevetti.it/copyright/copyright-usa/
All my Stable Diffusion models in Civitai (as per my approval) are covered by copyright.
Description
You can find my model on MAGE:
https://www.mage.space/play/a6d517ff1ad74d196bbaec440caa14cc
The 1.99 GB file is the fixed version; the 1.60 GB file is the bugged one.
Why keep both files?
Because the two models give you different results, but for some users the 1.60 GB one will not work.
We have figured out how the bugged (1.60 GB) version works, and I think it's a funny and interesting bug.
I'll try to explain it with my horrible English:
1) In A1111, load a random model (X)
2) Generate any image
3) Load the Animerge 2.0 bugged version
4) Generate an image and save it
5) Load another model (Y), different from model X
6) Generate any image
7) Load the Animerge 2.0 bugged version again
8) Click on the "PNG Info" tab
9) Load the saved image
10) Click the "Send to txt2img" button
11) Click on the txt2img tab
12) Press Generate
Now you have two different images with the same parameters (seed, CFG, etc.) using the same model ;)
I think it's interesting, because if you have patience, you can find the style that you like.
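My best guess at the mechanism, as a toy PyTorch sketch: if a checkpoint file is missing some keys, loading it non-strictly leaves the previously loaded weights in place, which would explain why the result depends on which model was loaded before (this is an illustration of the idea, not the actual A1111 loading code):
import torch
import torch.nn as nn

model = nn.Linear(4, 4)  # toy stand-in for the SD pipeline
old = {k: v.clone() for k, v in model.state_dict().items()}

# a "checkpoint" that is missing some keys, like the bugged 1.60 GB file might be
partial = {"weight": torch.randn(4, 4)}  # no "bias" entry
model.load_state_dict(partial, strict=False)

# the bias is still whatever was loaded before
print(torch.equal(model.state_dict()["bias"], old["bias"]))  # True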
EDIT 2: The example galleries are from the bugged version; I'm preparing the galleries with the fixed one.
BOOOOOMMMM!!! SURPRISE!!!!
LoL XD XD XD
In all sincerity?
I would never have expected such a result!
Personally, I am very proud of this model, because I achieved what I had set out to do: keeping the style of Aniverse while making it realistic.
I also have to admit it was really pure luck (in Italian we say "botta di culo", which literally translates to "a hit in the 4$$" and means a stroke of random luck), because I'm still a total noob when it comes to creating models via merging.
I sincerely thank epinikion (one of the best creators here) for his model epiCRealism. I merged it with my Aniverse (an unpublished version of Aniverse); without it, this version of Animerge would never have been born.
Of course, Animerge 2.0 is not without flaws.
Sometimes it doesn't follow the prompt perfectly, other times the irises aren't perfect, and the images created are sometimes a little too NSFW, but nothing too excessive.
So have a good time, and I really hope you like this version as much as I do!
Shoot your best results in the gallery!!!
PS: I'm not sure about this, but I may prefer using EasyNegative instead of FastNegativeV2 with this version of Animerge; let me know which one you prefer!
Comments
This is such a great model! Could we possibly have an SDXL version?
Hi Sandro! Thank you ❤️.
Sadly, I tried to train an XL model, but with my GPU (a 2060 with 12 GB of VRAM) it's literally impossible. The training duration was over 620 days... :_(
I tried to copy all the information from the image you posted of the girl with the robot body, but my result doesn't look the same as yours, even though I took all the information from how it was created. Could you explain it to me? Did you change it or update it? I'm waiting for your reply.
Hi veq! Which file have you downloaded? Because all my galleries are made with the bugged version. (Yesterday I updated the model; there are two files, a fixed one and the bugged one.)
Also, I'm using xformers, so the images I make can be a little different from yours if you haven't installed xformers (0.0.20).
My A1111 Version: v1.6.0-RC-28-ga0af2852 • python: 3.10.6 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2
EDIT: Did you also copy the width and height? (512x904 px, by the way)
Can you post an image of yours that clones my prompt, so I can help you better?
@Samael1976 Of course, I'll post it here for you to analyze.
@vequinnt910 perfect!
EDIT: I've started to upload new galleries created with the fixed model
@Samael1976 I just posted the image I tried to replicate... it looks different from mine...
The first thing I saw is that in ADetailer you added: 1girl.
Now I'll try to replicate your image.
(The best thing you can do is download one of my images, go into A1111, click on the "PNG Info" tab, upload my image, click the "Send to txt2img" button, click on the "txt2img" tab, then generate.)
@Samael1976 So... I swear I did it... I put the original image in the PNG Info tab, but it always generates this image that's different from yours...
I used the PNG Info tab with your image in my A1111. The results are different; I think there is some difference in xformers, but that's only my guess, I'm not sure about it.
Have you used the same VAE?
1) the first image is yours
2) the second image is made with the FIXED version
3) the third image is made with the BUGGED version
https://civitai.com/posts/630417
@vequinnt910 LoL, I trust you :) I'm trying to understand... just out of curiosity, do you have an NVIDIA graphics card? Which version of xformers have you installed?
@vequinnt910 And another question: have you installed the EasyNegative embedding?
Ok, I've added a new image:
https://civitai.com/images/2674949?period=AllTime&sort=Newest&view=categories&modelVersionId=166190&modelId=144249&postId=630417
without using EasyNegative.
As you can see, the image is now more similar to yours.
I think you are using a different EasyNegative than mine.
@Samael1976 Do you have Discord? I wanted to show you live...
@vequinnt910 Yes I have, but I'm at work now, I can't, sorry!
@Samael1976 All right. I'll try to install everything again and review everything. Thanks for your help
@vequinnt910 I'm sorry I can't help you more than that...
PS: What version data does your A1111 show at the bottom? Mine:
My A1111 Version: v1.6.0-RC-28-ga0af2852 • python: 3.10.6 • torch: 2.0.1+cu118 • xformers: 0.0.20 • gradio: 3.41.2
@vequinnt910 PS: I'm on a dinner break now; if you want, we can try Discord
@Samael1976 It's the same version... I sent you a request on Discord; I'll send you a video I'm going to record of me doing the procedure
@_YORU_ I'm also doing tests; using a Mac I don't have xformers, and the images change a lot. Yoru, do you have xformers? thx
@Romanowvv Yes, he has xformers (we had a meeting on Discord ;)
@Romanowvv PS: I don't know if you read the "about this version" notes. I'll copy them here:
The 1.99 GB file is the fixed version; the 1.60 GB file is the bugged one.
Why keep both files?
Because the two models give you different results, but for some users the 1.60 GB one will not work.
We have figured out how the bugged (1.60 GB) version works, and I think it's a funny and interesting bug.
I'll try to explain it with my horrible English:
1) In A1111, load a random model (X)
2) Generate any image
3) Load the Animerge 2.0 bugged version
4) Generate an image and save it
5) Load another model (Y), different from model X
6) Generate any image
7) Load the Animerge 2.0 bugged version again
8) Click on the "PNG Info" tab
9) Load the saved image
10) Click the "Send to txt2img" button
11) Click on the txt2img tab
12) Press Generate
Now you have two different images with the same parameters (seed, CFG, etc.) using the same model ;)
I think it's interesting, because if you have patience, you can find the style that you like.
@Samael1976 Yes, I have seen there are 2 models (fixed, 2 GB, and not fixed, 1.6 GB). I'll try what you suggest, thanks. For now I was focusing on how much the absence of xformers weighs on the final result: I was trying to do "Send to txt2img" from this one https://civitai.com/images/2674949?period=AllTime&sort=Newest&view=categories&modelVersionId=166190&modelId=144249&postId=630417 but my result is very different, especially in the brightness of the LED inserts of the robotic parts and in general on the image surface. The pose is also different from the copied image; it is mirrored. Is it possible that all this is due to the absence of xformers? version: v1.6.0 • python: 3.10.11 • torch: 2.0.1 • xformers: N/A • gradio: 3.41.2 • checkpoint: b1fdf327fa
@Romanowvv I really don't know if xformers causes these differences... do you want some advice? Wait until Monday; the new version will be released :)
@Romanowvv I have; this is just a bug... a rare one, even... but you can replicate it using an extension called Checkpoint Model Mixer
@Romanowvv Little preview:
https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1ec96915-9548-41c1-954a-fcc6ff98fa82/width=2048/01h.jpeg
@_YORU_ Thank you. A bug in what sense? Did you use Checkpoint Model Mixer for your result? Did you mix the fixed model with the non-fixed one? Please tell me how I should use it, thxxx
@Romanowvv He probably means this solution:
I found that the state the bugged version ends up in can be saved and turned into a new safetensors file using the Checkpoint Model Mixer extension (https://github.com/wkpark/sd-webui-model-mixer).
After getting the desired image with the bugged version:
1) load the bugged model as model A
2) load as model B the last model used before switching to the bugged one
3) enable "Merge Block Weights" and select ALL blocks
4) set the block weights to:
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
5) run a generation; the image should be the same, and the system creates a checkpoint in RAM
6) use "Save the current merged model" to save the unbugged model
For example, I now have a safetensors file that always generates the first image, the one with the white cyborg
@Samael1976 Thx for your time. I will try it. And good work on the new release <3
@Romanowvv No problem, my friend! You are welcome ;)
@Samael1976 I sent you a request on Discord for a couple of questions, king; I'm XVX
@Romanowvv At the moment I'm a little busy this weekend... I'll approve it now, but I can only talk on Monday, sorry.
@Samael1976 Sure, thx a lot, have a good weekend
I found that the state the bugged version ends up in can be saved and turned into a new safetensors file using the Checkpoint Model Mixer extension (https://github.com/wkpark/sd-webui-model-mixer).
After getting the desired image with the bugged version:
1) load the bugged model as model A
2) load as model B the last model used before switching to the bugged one
3) enable "Merge Block Weights" and select ALL blocks
4) set the block weights to:
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
5) run a generation; the image should be the same, and the system creates a checkpoint in RAM
6) use "Save the current merged model" to save the unbugged model
For example, I now have a safetensors file that always generates the first image, the one with the white cyborg
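A rough sketch of the arithmetic this does, for anyone curious (file names are hypothetical; note the extension merges the models as loaded in memory, which is what captures the bugged state, so this on-disk sketch only shows what an all-zero block-weight merge computes: the output is exactly model A):
from safetensors.torch import load_file, save_file

a = load_file("modelA_bugged.safetensors")      # model A: the bugged one
b = load_file("modelB_previous.safetensors")    # model B: the previous model
w = 0.0  # all block weights set to 0 -> keep 100% of model A

merged = {}
for key, tensor in a.items():
    if key in b and b[key].shape == tensor.shape:
        merged[key] = (1 - w) * tensor + w * b[key]
    else:
        merged[key] = tensor

save_file(merged, "saved_state.safetensors")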
@indrema Great!!! Tomorrow I will try!!!! REALLY BIG BIG THANK YOU!!!
@Samael1976 By the way... my technique certainly saves the state and reintroduces it on the first startup, but the subsequent results are still not stable. It seems the model keeps changing with each generation. To reset it, you need to restart A1111. I'm sorry about that.
A very curious behavior.
@Samael1976 After some more tests it looks stable; maybe I made some mistake before. If you need it, I can share my fixed model with you.
@indrema I'm trying! Can I ask which models you mixed to get the model that reproduces the first image?
@indrema I'd really love it if you could! ❤️ Do you need my email? samuele[dot]bonzio[at]gmail[dot]com
Been having a lot of fun with the broken 2.0 checkpoint lol. I tried the steps that @indrema posted but wasn't able to replicate the same result as just loading the good checkpoint and then the broken checkpoint. I used SuperMerger (https://github.com/hako-mikan/sd-webui-supermerger) instead of Checkpoint Model Mixer. Not sure if that matters or not.
One thing I did notice with this approach is that the order of the checkpoints matters. If you use the broken checkpoint as "Model A", you end up with something that Model Toolkit still reports as broken. To get something that isn't reported as broken, you need to use the good model as "Model A" and the bad one as "Model B". Use the same Merge Block Weights but change 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 to 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1.
After carefully reading the documentation for Model Toolkit, I found a process that works for me 100% of the time to get the exact same effect as "loading the good model, then loading the broken model".
1. On the Toolkit tab, pick "NEW SD-v1" as the source. Click the Load button. Switch to the Advanced tab.
2. Change the Class drop-down to "UNET-v1". Select the broken checkpoint in the drop-down and click the Import button. You should see something like "Total keys: 686 (1.60 GB), Unknown keys: 686 (1.60 GB)."
3. Now change the Class drop-down to "CLIP-v1". Select the good working checkpoint you want to use and click the Import button.
4. Now change the Class drop-down to "VAE-v1". Leave the same checkpoint as before and just click the Import button.
5. Give your model a good name or just leave the default generated name and click the Save button.
You should end up with a good working model that has the CLIP and VAE from the "good" checkpoint and uses the UNet from the broken one.
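Roughly what that stitches together, as a hand-rolled sketch (file names are hypothetical, and it assumes the bugged file uses standard SD-v1 key names, which the "Unknown keys" report suggests may not hold exactly; Model Toolkit handles any remapping for you):
from safetensors.torch import load_file, save_file

broken = load_file("animerge20_bugged.safetensors")
good = load_file("good_model.safetensors")

fixed = dict(broken)  # keep the broken file's UNet weights
for key, tensor in good.items():
    # SD-v1 checkpoints prefix CLIP with "cond_stage_model." and the VAE with "first_stage_model."
    if key.startswith(("cond_stage_model.", "first_stage_model.")):
        fixed[key] = tensor

save_file(fixed, "animerge20_repaired.safetensors")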
@jmblsmit129 I've already tried Model Toolkit, but it's not working for me...