This tool doesn't specifically race-swap Asian to white/Caucasian; it simply removes Asian influence so your prompts can be more effective with whatever other race or culture you're trying to portray. The need for it comes mostly from heavy training that doesn't tag race or culture.
Follow me to make sure you see new tools like this, plus new styles, poses, and Nobodys when I post them. More of the Clutter series is coming too. Things move fast on this site, and it's easy to miss things.
Most of the recent, high-quality training has used anime and models trained on Asian people and their culture. Nothing wrong with that; it's great that the community and fine-tuning continue to grow. But those models are now mixed into almost everything, and because race and culture weren't specifically tagged, it can sometimes be difficult to get results free of Asian or anime influence. This embedding aims to assist with that. It can even change anime characters (though that wasn't the intended purpose).
I first created this while trying to make preview images for my South of the Border Style embedding. I was trying to get South American people and culture, but a lot of Asian culture was leaking into the generated images. This embedding fixed that.
How to use:
Use the negative prompt primarily, then add the positive prompts only if you need the extra help. There are four files to download (a short scripting sketch follows the list):
Asian-Less2-Neg: Use this one; place it in your negative prompt at strength 1.0 (an updated embedding that accounts for more modern models).
Asian-Less-Neg: The original; place it in your negative prompt at strength 1.0.
Asian-Less: Only use this in the positive prompt if you need the extra strength, for example on an Asian-specific model.
Asian-Less-Toon: Only use this in the positive prompt if you need to remove anime-like features in illustrations.
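In AUTOMATIC1111 you just drop the .pt file into the embeddings folder and type its filename into the negative prompt. If you script generations instead, here is a minimal sketch using the diffusers library; the checkpoint name and prompts are placeholders, not part of this release:

```python
# Minimal sketch: load the negative embedding and use its trigger token
# in the negative prompt at the default strength of 1.0.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder; any SD 1.x checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Register the embedding under a trigger token matching its filename.
pipe.load_textual_inversion("Asian-Less2-Neg.pt", token="Asian-Less2-Neg")

image = pipe(
    prompt="portrait photo of a South American woman, detailed face",
    negative_prompt="Asian-Less2-Neg, lowres, blurry",
    num_inference_steps=30,
).images[0]
image.save("portrait.png")
```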
Again, this isn't meant to race-swap, but to help you get other races and/or cultures into your image generation without the influence of the Asian image training. But like all models, the intended use and what you do with it are separate matters.
Do you have requests? I've been putting in many more hours on this lately. That's my problem, not yours. But if you'd like to tip me, buy me a beer. Beer encourages me to ignore work and make AI models instead. Tip and make a request, and I'll give it a shot if I can. Here at Ko-Fi.
Description
This is the main embedding; place it in your negative prompt at strength 1.
Comments
Could have called it disorientation.
haha
The most important embedding since noise offsets! Makes it possible to use anime / illustration models for photorealistic embeddings. Thanks!
Can you make a LoRA version, so I can merge the LoRA with Asian checkpoint models?
haha, that would be genius
Beautiful Asian girls are the best, but I also like beautiful white girls, so I appreciate this embedding.
It is great that someone is seeing and calling out this over-biasing in the wrong direction.
Though it would also be nice if people were as harsh toward things like CodeFormer, for example, which has a heavy bias toward the beautification of faces.
Anyway, it is at least nice that some people care and don't run around with totally blind eyes :)
In general we have a heavy beautification problem, which often goes in ridiculously artificial directions.
Asian models often take it to the PEAK.
A lot of this is also because people often don't really understand the relationship between the sampler, base model, and upscale pipeline and its results in photo space, not to mention the additional complexity when something like CodeFormer comes into play.
And let's better not start speaking of the difference in perceptual complexity between photo and reality.
BTW: many of these negative-trained textual inversions also often have a very heavy Asian bias,
and sometimes a bias toward beautification and de-aging beyond comprehension.
And I slowly wonder if kids/teens are at work here doing these things.
Not to mention, most probably, a lot of pedobears hiding and searching for acceptance.
Impressive job, thank you for sharing this!
I've been wanting something like this for months now. I like Asian women as much as the next guy, but there's a sameness now in many models due to mixing in some of the realistic models that were fine-tuned on data of only Asian women.
Most of these AI faces don't look "natural"; they look like they've had plastic surgery performed on them.
Same! Though even this inversion can't save majicmixRealistic, from my testing. That otherwise great realistic model appears to only know that Asian people exist :)
@argh I tried out most checkpoints that claim "realistic" images, and photomerge seems to be better than most. Use it with the negative prompt "asian".
I tested this, and while the effort is great, there is a superior method. Use the following for superior results without this textual inversion:
Positive: race, nationality. Example: (Caucasian, American)
Negative: (Asian)
Place them fairly close to the start of both prompts for increased priority.
The issue with this model is that it biases very heavily toward specific physical traits, so most people will not want to use it. However, if you find a prompt you like and need a non-Asian result, it's worth comparing the results from the prompt I suggest against the results with the textual inversion, to see which you prefer; the results this textual inversion produces aren't bad, just biased. For anyone wanting the ideal solution, you do not need this textual inversion.
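For anyone who wants to run that comparison, a minimal fixed-seed sketch with the diffusers library follows; the checkpoint, prompts, and seed are illustrative placeholders, and note that vanilla diffusers does not parse AUTOMATIC1111-style parenthesis weighting, so plain terms are used:

```python
# Fixed-seed A/B: a plain negative term vs. the textual inversion.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("Asian-Less-Neg.pt", token="Asian-Less-Neg")

prompt = "caucasian american woman, portrait photo"  # ethnicity terms up front
for label, negative in [("plain", "asian"), ("embedding", "Asian-Less-Neg")]:
    generator = torch.Generator("cuda").manual_seed(42)  # same seed for a fair A/B
    image = pipe(prompt, negative_prompt=negative, generator=generator).images[0]
    image.save(f"compare_{label}.png")
```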
"Superior" is highly subjective here. By that you mean "the usual method everyone already knows."
First, this isn't meant to replace that method; it's just meant to be another one. It works, and it achieves different results as well, which makes it equally valuable. I've done testing using your "method" and this negative embedding, and even both together. All three are completely different.
There is no 'problem' with this method whatsoever.
There is no 'right' way of doing any of this.
It's all about taking the tools provided and bending them in different ways to make something new.
Second, this embedding helps a lot on models that are heavily trained on Asian datasets. I implore you to continue your testing on models such as this:
https://civitai.com/models/25494/brabeautiful-realistic-asians-v5
You'll find that neither method works well on its own, but combined they do a pretty great job.
Just pointing out that this resource's value doesn't diminish based on your personal tastes and opinions whatsoever, and claiming things are "superior" or "ideal" puts what you prefer on a pedestal and can deter others from testing things for themselves. Considering you have zero uploads and zero contributions to this community, I would think anyone would be wise to take your opinion with a grain of salt, especially when commenting on an upload from someone like @Zovya, who has far more established credibility and contributions.
Thanks for the kind words, Fenn. Kudon, an embedding is just a series of tokens with varying degrees of strength; that's all a prompt ends up being, too. Your success with using "Asian" in the negative will only go so far and will be model dependent. When Asian creators train a face, do you think they're adding "asian" to the tags? Probably not.
Ultimately, a precise and complicated prompt will do just as well as an embedding, so it's more of a convenience, or a tool for people who aren't as skilled at prompting. Either way, it's small, lightweight, and best of all, free.
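You can verify the "just tokens" point by opening the file yourself. A small sketch, assuming the usual AUTOMATIC1111-style .pt layout (a "string_to_param" dict holding the learned vectors); print the loaded dict if your file differs:

```python
# Peek inside a textual inversion: it is just a tiny tensor of learned
# token vectors living in the text encoder's embedding space.
import torch

data = torch.load("Asian-Less-Neg.pt", map_location="cpu")
for name, vectors in data["string_to_param"].items():  # token name -> tensor
    # e.g. (N, 768) for SD 1.x: N learned vectors in the 768-dim text space
    print(name, tuple(vectors.shape))
```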
@Zovya I used to use the negative "asian" tag, among others, but I have come to realize that while it works, it is not optimal. Like you say, they don't tag "asian"; also, "asian" can mean many things, not just a person's face, and so on.
Putting (Asian) in the negative just made them even more Asian for me.
@DannyCool I tend to use "Asian ethnicity", since the internet is very Western-biased and anything with "ethnicity" is generally coupled with non-Caucasian people.
My testing indicated that the combined use of both methods in specific amounts based on the weights of the models, the number of tokens in surrounding textual inversions, and the strength of certain LoRA types is, in fact, optimal.
I suggest you continue with a more rigorous experimental regime for even more superior results. In this there can be no preference, only what is optimal. I applaud that you tried. You should be proud of yourself. Continue to learn a more scientific method, however.
For anyone wanting the optimal solution I suggest genetic testing be performed on your images.
/s
@Fenn You failed to read what I actually said and just want to argue, so I'm going to keep it simple; if you continue after this post, I will not bother wasting further time. When I stated "superior" I gave context as to why, but also pointed out where this specific textual inversion can still have merit. I clarified quite clearly that the textual inversion produces biased results, which makes it an inferior method; that is an actual fact. It skews toward specific types of people of given races and lacks the richer diversity and more accurate, less biased results of the method I stated. Nothing you stated refutes, or is even relevant to, anything in my initial comment. Please don't waste time and space on pointless arguments, especially when someone's post was intended to be insightful and helpful, not trolling or inflammatory.
@thelustriai This depends on the model. With most models you will not want to use both, because it can produce inferior results and has a strong bias. For models that are very specifically tailored to Asian results then, as Fenn and you stated, you would want to use both, because those models are inherently not capable of properly producing non-Asian results; the textual inversion provides more data to work with and is, essentially, the only option there. However, you are better off not using Asian-specific models for such purposes to begin with.
This comment was helpful—I'm not sure why it's getting downvoted.
@shaozi88 Not too sure honestly, but thanks. If you are looking for a good model for this suggestion, the best all-rounder I've found is probably Level4, so it may be worth checking out if you haven't already found a better one to use with positive/negative modifiers for the intended race/nationality. https://civitai.com/models/17449/level4
Harvard admissions called...
Harvard needs a " Smaller Noses" TI
Godsend!
God, this is such an amazing resource, but I noticed that if I use all three (turning anime girls realistic) and crank them up to 1.7, a white crack appears across their faces. Not sure how to resolve this.
Thank you so much!! I have been trying to find models with a regular cartoon style, and 99% of them are anime style. I'm going to try this and see if I can get closer to what I'm looking for.
How do you make something like this?
I'd like to echo the question about how you achieve these kinds of embeddings and LoRAs. Would you consider writing a guide?
Most people don't understand what embeddings mean. Used in the negative prompt, they essentially remove something from Stable Diffusion's output. Some people remove ugly faces, some remove ugly hands; his embedding removes Asian faces. You pretty much train it on what to remove. Learn how to make embeddings and problem solved.
The Toon version is nice, but the training data was not well tagged; lots of Disney castles and princess-type stuff that isn't removed with the negative prompt. Do you have plans to retrain it, or is the training data available somewhere? It's a good concept and has potential!
Yeah, and as the models evolve, I'll need to revisit these again soon. V2 soon.
Any chance of getting this as a safetensor?
You can convert it yourself; see the sketch below.
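A rough sketch of that conversion, assuming the common AUTOMATIC1111 .pt layout ("string_to_param" holding the learned vectors); the key names a given UI's safetensors loader expects can vary, so treat this as a starting point:

```python
# Re-save the pickle-based .pt embedding as .safetensors.
import torch
from safetensors.torch import save_file

data = torch.load("Asian-Less-Neg.pt", map_location="cpu")
tensors = {name: vec.contiguous() for name, vec in data["string_to_param"].items()}
save_file(tensors, "Asian-Less-Neg.safetensors")
```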
Wanted to say thank you... I actually have no issue with Asian model faces; it's when the Asian model faces mix into some generations and make the generation look like it has a 15-year-old girl or boy instead of a mature male/female.
Need one of these for 'no childish faces' :D
Asian girls in most AI models all look like they've had a crap ton of plastic surgery, maxed out to the point any doctor would say "no! its impossibruuu!!"
XL? Please.
Just putting the literal word "Asian" in your negative prompt can often help
Idk if this has already been asked: I use this embedding in pretty much all of my creations in SD, but I was wondering if there's something similar to this embedding on Civitai to use on the actual site?
Files
Asian-Less-Neg.pt