April 28, 2024: added V2 Rebirth pruned
v2:REBIRTH
Thanks to S6yx for the creation of this beautiful model. Enjoyed by millions. With their permission, I, Zovya, will be maintaining it moving forward.
April 4, 2024: fp16 and +VAE added
April 2, 2024: Rebirth
Update 3: Disclaimer/Permissions updated
Update 2: I am no longer maintaining/updating this model
Update 1: I've been a bit burnt out on SD model development (SD in general, tbh), and that is the reason there has not been an update. Looking to come back around and develop again by next month or so. Thank you to everyone who sends reviews and enjoys my model.
Pay attention to the About this version section of the model page for specific version information.
Model Overview:
rev or revision: The concept of how the model generates images is likely to change as I see fit.
Animated: The model can create 2.5D-like image generations. This model is a checkpoint merge, meaning it is a product of other models combined to create a result that derives from the originals.
Kinds of generations:
Fantasy
Anime
Semi-realistic
Decent landscapes
LoRA friendly
It works best on these resolution dimensions:
512x512
512x768
768x512
VAE:
Prompting:
Order matters - words near the front of your prompt are weighted more heavily than those at the end of your prompt.
Prompt order - content type > description > style > composition
This model likes: ((best quality)), ((masterpiece)), (detailed) at the beginning of the prompt if you want the anime/2.5D type
This model does great on PORTRAITS
Negative Prompt Embeddings:
Make use of weights in negative prompts (e.g. (worst quality, low quality:1.4))
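The ordering and weighting advice above can be sketched as a small script. This is an illustrative sketch, not an official tool: the tag values are made up for the example, and the `(term:1.4)` syntax is the attention-weight notation used by AUTOMATIC1111-style UIs.

```python
# Sketch: assembling a prompt in the recommended order
# (content type > description > style > composition), with A1111-style
# attention weights. All tag values here are illustrative examples.

def weight(term: str, w: float) -> str:
    """Wrap a term in A1111 attention syntax, e.g. (term:1.4)."""
    return f"({term}:{w})"

quality_tags = "((best quality)), ((masterpiece)), (detailed)"  # front-load quality
content = "portrait of an elf ranger"        # content type
description = "silver hair, green cloak"     # description
style = "fantasy, 2.5D, anime style"         # style
composition = "upper body, soft lighting"    # composition

prompt = ", ".join([quality_tags, content, description, style, composition])
negative = ", ".join([weight("worst quality, low quality", 1.4), "bad anatomy"])

print(prompt)
print(negative)
```

Because tokens near the front carry more weight, the quality tags lead and composition details trail.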
Video Features
Olivio Sarikas - Why Is EVERYONE Using This Model?! - Rev Animated for Stable Diffusion / A1111
Olivio Sarikas - ULTRA SHARP Upscale! - Don't miss this Method!!! / A1111 - NEW Model
AMAZING SD Models - And how to get the MOST out of them!
Disclaimer (Updated 10/31/2023):
The license type is CC BY-NC-ND 4.0
Do not sell this model on any website without permissions from creator (me)
Credit me if you use my model in your own merges
You can use derivative models which uses ReV Animated for Buzz points and site-based currency that does not convert over to real world currency.
Do not use this model to monetize on other platforms without expressed written consent.
Description
This version is meant for inpainting and outpainting.
Like the work, you can support me here:
https://ko-fi.com/s6yx0
FAQ
Comments (79)
This model is really awesome!
How do I download the new version? Civitai offers an old version, though a new one was uploaded
there's no new one besides v1.1; the one uploaded after that is 1.1-inpainting, which isn't for generating images but for fixing images in inpainting
Great model and ultra versatile.
Can you add, if possible, a smaller 2GB pruned fp16 no-baked-VAE safetensors version of v1.1?
Just want to say one word: 6 (Chinese internet slang for "awesome")
I think this is the best model ever! I love it.
But one question: in all the graphics I've created, I can't get rid of the subject's face. Let me explain better.
If I want to create a subject wearing a cyberpunk-style closed helmet, the AI always generates a body and a face wearing the helmet, but the face is always visible. I would like the face to be hidden by a closed helmet! I hope I was clear. I tried negative prompts to eliminate the visible face, but without success
just do some inpainting afterwards
What is CLIP 2? I can't find anything about it.
@cromoose I know what CLIP is, but what is CLIP 2? Is it just openCLIP? Which openCLIP?
@haodocowsfly clip skip 2...
I don't understand it either. Everywhere they write "Use CLIP skip: 2", but no one writes where it should be set. In the Easy Diffusion v2.5 settings I did not find it anywhere. Should it be written in the text prompt?
/Settings/Stable Diffusion, bottom of page
@Relick There are no such settings in ED 2.5
@nfjdsngjf696 There is nothing in these videos about the settings in Easy Diffusion 2.5
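For AUTOMATIC1111 specifically, "CLIP skip 2" is the "Clip skip" slider under Settings > Stable Diffusion, and it can also be set per-request through the web API's `override_settings` field. A sketch of such a request payload (the `CLIP_stop_at_last_layers` key and `/sdapi/v1/txt2img` endpoint are the ones documented for A1111's API; Easy Diffusion is a different frontend and may not expose this setting at all):

```python
# Sketch: a txt2img API payload for AUTOMATIC1111 that applies CLIP skip 2
# for this request only, via override_settings. Prompt values are examples.

payload = {
    "prompt": "((best quality)), ((masterpiece)), (detailed), portrait",
    "negative_prompt": "(worst quality, low quality:1.4)",
    "width": 512,
    "height": 768,
    "steps": 25,
    "override_settings": {
        "CLIP_stop_at_last_layers": 2,  # i.e. "CLIP skip: 2"
    },
}
# This dict would be POSTed as JSON to http://127.0.0.1:7860/sdapi/v1/txt2img
# with the webui launched using the --api flag.
```

Setting it in `override_settings` avoids changing the global setting for other generations.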
any plans for a new version of this? it's really good!
?? this model is 2 weeks old. It's new
@s6yx oh wtf my mistake, my bad... i was looking at the v1.0 didn't see the 1.1 was just released recently
@honfoetrash793 if it's good why new version? If it's not broken don't fix it
jk of course
@Rowlinson it's just that it's so good that recently I even started merging other models using Rev as a base, and it gave me some solid results; it's basically one of the top models for me. But yeah lol, v1.1 was a big fix for a lot of the stuff that felt weird in 1.0 for me; if there's ever a new version or whatever, it'd probably be fucking gold.
thx o7
@honfoe I asked him on Reddit; he said this is not yet finished, it was released early, so yup, there will be more.
what is this 1REVA_REV_1.1c ?
it's the current model available (1.1). Just the naming scheme I use when testing
The model is great and I would really like to use it at its best, but... I keep getting these watercolored results; clearly I'm missing something in my SD setup.
I've read about kl-f8-anime2.ckpt and orangemix.vae.pt
Once I download them, where do I put them? In the model folder? I'm a newbie; I'd like a step-by-step guide, please
This is a fairly good tutorial.
https://stable-diffusion-art.com/how-to-use-vae/
(They have many other tutorials there as well..)
But basically put VAE here: stable-diffusion-webui/models/VAE
And then:
To use a VAE in AUTOMATIC1111 GUI, go to the Settings tab and find a section called SD VAE (Use Ctrl+F if you cannot find it). In the dropdown menu, select the VAE file you want to use.
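The placement step above can be sketched as a few shell commands. The directory layout assumes a default AUTOMATIC1111 `stable-diffusion-webui` checkout; adjust the path to your install, and the download is left commented out so you can pick whichever VAE you prefer.

```shell
# Sketch: placing a VAE file where AUTOMATIC1111 looks for it.
# WEBUI_DIR is an assumed install location; change it to match yours.
WEBUI_DIR="stable-diffusion-webui"
mkdir -p "$WEBUI_DIR/models/VAE"

# Example download (kl-f8-anime2, linked elsewhere in this thread):
# curl -L -o "$WEBUI_DIR/models/VAE/kl-f8-anime2.ckpt" \
#   "https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime2.ckpt"

# Then in the UI: Settings tab -> "SD VAE" dropdown -> select the file.
ls "$WEBUI_DIR/models"
```

After placing the file, reload the UI (or hit the refresh button next to the SD VAE dropdown) so it appears in the list.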
where is the vae any one help please
Many seem to be using the kl-f8-anime2 VAE
https://huggingface.co/hakurei/waifu-diffusion-v1-4/blob/main/vae/kl-f8-anime2.ckpt
Is there a CKPT version? My program can only accept the CKPT format
Having a problem with v1.1 on my local webui. no matter what I try this checkpoint generates blank black images. I've used tons of other checkpoints with 0 problems with multiple VAEs but only this (and Zovya's RPG) are causing this error.
Tried editing webui-user.bat to add --no-half-vae --no-half --xformers --disable-nan-check, and just about every other 'solution' I'd find ends up not working.
https://i.imgur.com/ajXwDge.png
try removing --no-half and --disable-nan-check
@s6yx unfortunately I just get an error that says
'NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.'
which would just put me back at square 1. I appreciate the suggestion though
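For anyone else hitting all-black output or the NansException above: the flags involved are standard A1111 command-line arguments set in `webui-user.bat`. A sketch of combinations to try, one at a time rather than all at once (whether any of them helps depends on the GPU; the "Upcast cross attention layer to float32" setting mentioned in the error is a separate UI toggle under Settings > Stable Diffusion):

```shell
REM Sketch of webui-user.bat COMMANDLINE_ARGS to try for black images / NaNs.
REM Try one variant at a time; stacking them all can mask which one helps.

REM 1) Keep only the VAE in fp32, rest in fp16 (often enough):
set COMMANDLINE_ARGS=--no-half-vae

REM 2) If that fails, force full fp32 (slower, uses more VRAM):
REM set COMMANDLINE_ARGS=--no-half

REM 3) GTX 16xx-series cards frequently need:
REM set COMMANDLINE_ARGS=--no-half-vae --precision full
```

Note that `--disable-nan-check` only hides the error (producing black images instead of an exception); it is a diagnostic aid, not a fix.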
My notes:
1. What is bad-artist-anime? I only have the bad-artist embedding, so a link in the description of all embeddings used in the promos would be helpful, thanks :)
2. NG_DeepNegative_V1_75T is better than NG_DeepNegative_V1_4T. The former tends to give very "procedurally ordered" vegetation, while the latter gives more randomized results.
1. bad-artist-anime is an embedding which seems to have been taken down
2. I use this embedding; no need to link it to me since I use it on most of my generations
@s6yx
1) Unfortunate :(
2) I added the link for other peeps, since they might not know where to get it :)
BTW, thanks for the reply, and also thanks for the great model!
https://huggingface.co/nick-x-hacker/bad-artist/tree/main bad-artist and bad-artist-anime are both here. well, that's where i got them anyway lol
hands training, please!
Can you guys give me the best VAE for this checkpoint, plz
second this; the model looks great, but it's a merge, and the instructions for the VAE are not clear. lazy.
Hello, sorry for the basic question. What VAE could I use for version 1.1?
very good generally but bad at hands and feet
Very good, but hands are terrible. If you could train them, that model would be perfect!
I want to train my own checkpoint model. I have about 10,000 high-quality pictures. How can I do it? Does anyone know? If you provide relevant technical documents or solutions, I will be very grateful
@kyactive thanks, let me try
youtube can help you with that
corona
@1234popo405 If you want to help me, I will thank you; maybe some people will not, but you can't say that China is all like this
@woncho thanks, I found some tutorials and am trying them
@s6yx first of all, it's the best and my favorite model!
I created animatrix, based on your model. It has problems with the VAE (clip skip = 2). I think I found a way to fix this:
- mix Colorful (for example) with ReV Animated using weighted sum at 0.01 (we just want to fix, not merge)
- remove the VAE (model converter)
- add a VAE (weighted-sum merge, no interpolation)
As I remember, this fixes it (the model will be colorful with clip skip 1), but I am not sure. After that, I mix it via script with my model in some proportion.
Another way: just mix animatrix with ReV (play with the ratio)
Good luck!
I love this model, but it seems to only crank out one type of woman - a buxom model with wide hips and large breasts. I can SOMETIMES get it to craft more normal looking people, but it's really heavily weighted towards that "idealized" body. Great for certain applications but I wish this model had just a little more range, or it was clearer how to get a wider range of bodies.
hi, may i ask which vae do you use?
go to model v1.0 description and you will see...
How good is this model with DnD creatures? The Dungeons and Diffusion one isn't the greatest
install it and try it out
@s6yx It's tough to get non-humans right. But I'm using it for sure for my human and humanoid NPCs. No better model for that, that is for sure
I REDID THE MODEL INFO SECTION. THERE IS NO EXCUSE FOR REDUNDANT VAE QUESTIONS TO EXIST IN THE COMMENTS ANYMORE
Ah, I was just about to ask! Do I need those VAEs for good image generation, or are you simply informing that you used them in the training process, and this checkpoint doesn't need to be paired with any VAE?
@TernaryM01 the former
I think you may have been overly irritated when you did this lol, I don't think blessed2.vae.pt is a valid URL. Hoping that it saves you clicks- it's NoCrypt/blessed_vae · Hugging Face
@mindframe lol oh you could tell. thanks for headsup, fixed in the main info section
Sorry, got a quick chuckle from this... You're an absolute legend, fam; don't bother with the VAE orcs.
Again, you have all my admiration and respect for these great models. (Made a meme on the civitai Discord memes topic inspired by your comment)
@s6yx also seems like orangemix.vae.pt link is invalid too
@UselessMeaning thank you, fixed
Getting really odd results. I hunted down the VAE files, but it kind of got worse
you did not read the model description. I linked the VAE.....
is this good for image-to-image?
Details
Files
revAnimated_v11-inpainting.safetensors