MCNL is meant to be a convenient way to uncensor Qwen with many concepts included. Please keep in mind that loras trained on a single concept will generally have better quality than a multi-purpose one like this.
--UPDATE--
V1.0 is out, it adds a lot of new concepts and improves overall quality while being 4 times smaller in size!
Trigger words: nsfw, cum_on_face, blowjob, cowgirlout, creamp1e, penis, l1ck, missionary, nipples, reversecowgirlpov, vagina
Some context for the trigger words: "cowgirlout" is the cowgirl position viewed from an outside perspective, "l1ck" is a woman licking a penis, and "reversecowgirlpov" is the reverse cowgirl position from the man's POV. Everything else is self-explanatory.
Enjoy, gooners
Comments (80)
Another masterpiece from Qwen...
It uses the Lightning LoRA and works well with 4-8 steps.
How many images did you use to train the dataset?
The dataset for all concepts combined consisted of around 600 images
divide 600 by the number of concepts to get a rough per-concept count.
I've tested multiple times and DoggyStyle does not work. Can you provide an example prompt?
doggystyle was the one concept I forgot to test, and you're kind of right. I just tested it, and it is very hard to get a doggystyle image (the few times it generated successfully, it didn't look very good either). It seems to be getting confused with the reversecowgirlpov concept, since it keeps making the man lie down instead of standing up or staying on his knees. The issue seems to be that the 2 concepts are visually kind of similar, and also that I don't have many images for the doggystyle concept.
I will just remove it from the list of concepts/triggers, in the future I might make a v2 with more balanced data that would hopefully fix this issue.
it does nipples fine, but vagina is not there yet, they come out very deformed. Reminds me of the vaginas from when the first nude loras came out for Flux and SDXL. Still needs time to cook more.
"still needs time to cook more" the lora has 10 concepts... if I make a lora for only one concept (e.g. vagina), it will render them well almost always. The purpose of this lora wasn't maximum quality, the focus was quantity, it was a fun experiment for me, and a fairly convinient way to do a lot of different nsfw stuff decently. Nipples looks fine because they're fairly simple, while vaginas have lots of detail, folds etc, which makes them hard to incorporate in a lora with so many concepts.
The truth is though, qwen requires pretty heavy hardware to train, and each lora is much more expensive to make than loras for other models I've trained. That's why I decided to do one lora with many concepts, since making a separate lora for each would've cost me a lot more.
Mind you, I do this completely for free; I don't ask for donations, and I don't have a Patreon or any donation page set up. If you want super good vagina rendering, or super good cum rendering or whatever else, this is simply not the lora for you, since the focus is on generalization, not on single-concept expertise.
jorkingtoncityshallwe Can it be trained on 24GB? Even if slower?
brnlittokhoes3110 Ostris just released a new update that enables you to train on a 24GB card, but the quality is quite a bit worse because his method leverages 3-bit training with a specialized lora to "restore" the quality. The lora that restores the quality isn't magic, and ofc it isn't perfect. So loras made with this method end up having more artifacts overall, and it probably won't be able to learn small details. Here's his video on it https://www.youtube.com/watch?v=MUint0drzPk
jorkingtoncityshallwe I'm on a training run on a vagina lora as we speak with that method, here is the progress. I think with more time and steps you can get good results. I think I bungled it by setting the steps too low, and will likely have to do a fresh run with at least 10k steps. The settings seem to be good, nothing is overcooking, it's just slow cooking, so with more steps I think it will be good. https://civitai.com/posts/20990716
fox23vang226 yes if you're training only one concept it can work, did you even read my previous reply? My lora contains 10 concepts and it has been trained for almost 31k steps, training it for longer wouldn't help much, in my case it's not the amount of steps that is the issue
jorkingtoncityshallwe Yes I did read it. I did get tunnel vision because you typically dont train multiple concepts in a lora due to its extreme complexity. The distinction and difficulty between a single-concept and multi-concept Lora is a big one. It's interesting that even with a massive 31k steps, the details still aren't coming through for a multi-concept lora. In my vagina training run at just 1800 steps im already seeing small and noticeable improvements from each 300 steps. Have you considered trying a different learning rate or maybe adjusting the network_alpha and network_dim to see if that helps with the detail retention across all the concepts?
fox23vang226 v0.1 had a higher network dim/alpha (which is why it was also significantly bigger in size), and it didn't perform much better, if anything v1.0 which is about 4 times smaller seems to be better.
Higher learning rates would just start cooking either all concepts or specific concepts only (depending on how high/low it was set), the learning rate that v1.0 was trained on was close to the max possible without cooking any of the concepts, but perhaps I could've done a bit higher to save on training time.
Honestly, I think if I make a v2.0 I would try to train it without quantization (kinda pricey, because I would probably need 80GB of VRAM, perhaps more, not sure), I would probably try a dim/alpha somewhere between v1.0 and v0.1, and I would try to improve the dataset to be a bit more balanced.
Anyway good luck with your lora, I hope it turns out well
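For context on the dim/alpha knobs being discussed: in an ai-toolkit config they live under the network section, roughly like this. The values below are illustrative, not MCNL's actual settings, and key names can shift between versions, so check the repo's example configs:

```yaml
network:
  type: "lora"
  linear: 32          # network dim
  linear_alpha: 32    # network alpha
train:
  steps: 31000
  lr: 1e-4            # too high and concepts start to "cook"
```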
any dildo insertions?
this lora has not been trained on any dildo related content.
you are my god!!! you are the best genius in the 21st century
great for general nsfw, but any tips on avoiding plastic skin? I get more realistic-looking skin with the stock qwen model.
You could try doing a second pass using Wan 2.2 or Flux Krea, which handle skin texture better.
Qwen succeeded in returning to the 'sdxl' era
Will there be a version for qwen image edit? It doesn't work with it at the moment.
Currently qwen image edit is not supported in any training tool to my knowledge. Also qwen in general costs too much to train properly, I could consider it once support for the model is available, but I don't promise anything.
@jorkingtoncityshallwe Okk I understand , thanks for the reply !
@jorkingtoncityshallwe I haven't tried this myself yet, but apparently qwen image edit lora training is here: https://www.reddit.com/r/StableDiffusion/comments/1mvph52/qwenimageedit_lora_training_is_here_we_just/
I tried using it to replace the original 4-step lora provided by the official workflow, and it works!
@jorkingtoncityshallwe Qwen Image Edit is supported by Ai-Toolkit.
@Skuuurt I know, it wasn't supported when I first wrote my comment :D
hi, I tried using your lora but the output photos were always very bad quality, yet when I don't use it the output is okay. I was using Qwen Image Q4_1 GGUF, not sure if that matters.
the replies in this comment section might help since they were experiencing the same issue and managed to solve it https://civitai.com/models/1851673/mcnl-multi-concept-nsfw-lora-qwen-image?modelVersionId=2105899&dialog=commentThread&commentId=898889
When using it with one or two other loras, image degradation and a yellow tint set in fast. Any advice on how to avoid this?
I have been able to use this lora alongside my style loras without an issue. It all depends on the loras you're using; just play around with the weights until you get a good result. Also, if you're using a heavily quantized version of the model and the 4-8 step loras, you're more likely to have issues. FP8 with 20ish steps will be consistently better.
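The "play around with the weights" advice makes more sense once you see what a LoRA strength actually does: each loader just scales a low-rank delta before it's added to the base weights, and stacked LoRAs sum their deltas, which is why two strong adapters together can push the model into degraded output. A toy numpy sketch of the arithmetic (shapes and names are illustrative, not the real model layout):

```python
import numpy as np

rng = np.random.default_rng(0)
rank, d = 4, 16
W = rng.standard_normal((d, d))  # stand-in for one base-model weight matrix

def lora_delta(alpha, rank, d, rng):
    # a LoRA stores a low-rank update B @ A, scaled by alpha / rank
    A = rng.standard_normal((rank, d))
    B = rng.standard_normal((d, rank))
    return (alpha / rank) * (B @ A)

delta_style = lora_delta(4, rank, d, rng)
delta_nsfw = lora_delta(4, rank, d, rng)

# each LoRA loader's strength just scales its delta; stacked LoRAs add up
def effective_weight(s_style, s_nsfw):
    return W + s_style * delta_style + s_nsfw * delta_nsfw

# lowering the strengths shrinks the total deviation from the base model,
# which is the lever for fighting degradation when combining loras
dev_high = np.linalg.norm(effective_weight(1.0, 1.0) - W)
dev_low = np.linalg.norm(effective_weight(0.6, 0.6) - W)
```

So dropping both strengths from 1.0 to 0.6 pulls the combined weights 40% closer to the base model, usually trading some concept strength for fewer artifacts.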
@jorkingtoncityshallwe It was indeed the 8 step Lora. Thanks a lot for your guiding answer
Question:
As I see it, you mention that trigger words work.
How and why would they?
Are you sure you're not just "overtraining" the unet?
I ask because trainers such as AI-Toolkit normally only train the unet,
not the CLIP/text encoder.
(Or what are you training with?)
That's why in ComfyUI, even if you set the clip_strength of the LoRA Loader to 1000, nothing changes.
That means you're not touching the clip at all.
Which means -> you can't train new trigger words.
Have you ever tried to train the text_encoder?
The Qwen-Image devs shared the training code.
First of all, you can train the text encoder in ai-toolkit, you just need to enable the option. And yes, this lora trained both the unet and the text encoder.
As for "This means, you're not touching the clip at all. Which means -> you can't train new trigger words": this is completely false. Even if you train only the unet, your trigger word will still work; training the text encoder as well ofc makes the model understand what it's doing better. (This applies to single-concept training only, btw; for multiple concepts I believe training the text encoder as well is pretty crucial, but I could be wrong.) I won't go into detail on why trigger words work without training the text encoder because this reply would get too long, but literally just train a lora yourself on a single concept with a trigger word, then use the lora with and without the trigger word. There will be a clear difference (as long as you didn't overtrain it).
@jorkingtoncityshallwe Actually the AI-Toolkit dev mentioned somewhere that you can't train the encoder :`D
The setting is there, yes.
But it doesn't do anything.
Please load your lora and push clip_strength to 1000.
You will see it won't change anything.
Set the model_strength to 1000 and you get overbaked shit.
On SDXL the clip_strength did do something, because you trained the text_encoder too.
Yes, actually I know what you mean with the trigger words.
You train with the text_encoder frozen, so no new words can be learned.
(That's how I understand it.)
So if, for example, you try to train:
"A woman getting fucked in her ass" but you caption it as "A woman getting 23fgd", it won't work.
In SDXL, "23fgd" would become the "getting fucked in her ass" because the clip would learn it.
In Qwen and Flux, for example, it 100% won't happen.
Never ever.
I've tried it 100 times recently.
And for models like Flux etc. you always have to hunt for trigger words that already work.
Have you ever tried writing just your "triggers"?
Without combining them with "woman, naked" and so on?
I'd say they won't make a big difference when written alone.
I will soon try to train the clip standalone on the same dataset.
Have you applied "Differential Output Preservation" to the training?
Somehow that should be a way to train trigger words in ai-toolkit again.
@LDWorksDV You know what you might be right, I will do some testing, if ai-toolkit's text encoder training doesn't work then a custom script would probably be best, and shouldn't be that hard to do (although having a pretty GUI is always nice haha). As for my sdxl based stuff, I used onetrainer, which I personally like quite a bit and seems to at the very least have working text encoder training :D.
Honestly though I don't think that I am gonna bother with qwen again cuz that thing takes insane hardware to train.
@jorkingtoncityshallwe Yeah, I know what you mean.
I mean, on RunPod an A100 uses 37GB of VRAM with default settings.
I guess with 40GB you can train there.
An L40 = $0.93 per hour.
Setting everything up takes 10 min.
(That's how I train.)
Not cheap, but it's "okay" xD
So yeah, I will do it soon with a custom script made by GPT.
Will try it.
This would theoretically mean that a nudity LoRA like Lustify could be possible with Qwen-Image (with enough training).
Like:
2000-image finetune of Qwen-Image
2000-image finetune of Qwen-VL-2.5
So that the text encoder even learns what the triggers are.
Would be amazing.
@LDWorksDV yeah, I just tested it with the same prompt/seed and different clip strength, and the resulting image was exactly the same. So you're right, ai-toolkit's text encoder training doesn't work properly. Honestly kind of impressive that this lora kind of works considering that the text encoder is essentially clueless about all the nsfw shit lol.
I have not used "Differential Output Preservation" btw, but in theory that should just be a way to preserve existing concepts when introducing new ones (so basically it prevents overfitting and "forgetting" of concepts).
@jorkingtoncityshallwe Yeah, these problems were actually already there with Flux and nobody believed me haha
So custom triggers never worked.
The model "somehow" learns what it sees and you have to find the right trigger.
Yeah, you're right.
Then this doesn't make sense.
I will try to train the clip soon.
Would be huge.
It would be interesting whether it's possible later to merge the Qwen-VL-2.5 LoRA with the Qwen-Image LoRA, to get the text_encoder layers into the image lora so we can use them together in one "LoRA_loader" again.
I thought about what I said and I guess you have to do it like:
1. Train a LoRA on Qwen-VL-2.5 / or / t5xxl
2. Merge this into the text_encoder
3. Train a LoRA on the unet with the new merged text_encoder. (Now you can use the known new trigger words.)
4. In the end, use the new text_encoder in Comfy + the unet model with the LoRA.
This would make the most sense to me.
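Step 2 of the plan above works because a merged weight matrix is mathematically identical to applying the adapter at runtime: folding the low-rank delta into the base weights changes nothing about the outputs, which is why the final step can use a plain text encoder with no LoRA attached. A toy numpy check of that equivalence (shapes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
rank, d = 4, 32
W = rng.standard_normal((d, d))     # stand-in for a text-encoder weight matrix
A = rng.standard_normal((rank, d))  # LoRA down-projection
B = rng.standard_normal((d, rank))  # LoRA up-projection
alpha = 4.0

delta = (alpha / rank) * (B @ A)

# "merging" folds the delta into the base weights permanently
W_merged = W + delta

x = rng.standard_normal(d)
runtime = W @ x + delta @ x         # base + adapter applied at inference
merged = W_merged @ x               # merged encoder, no adapter needed
```

Since `(W + delta) @ x == W @ x + delta @ x`, the merged encoder behaves identically to base-plus-adapter, just without needing a loader that understands text-encoder LoRAs.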
@LDWorksDV so just a little update if you're curious, I managed to train the text encoder on the same dataset as the lora, the results were the following:
- in some cases completely mangled/bad looking results were completely fixed with the nsfw text encoder (fairly rare)
- in some other cases it would make the output a bit worse (fairly rare as well)
- the overall nsfw "understanding" and prompt adherence seemed slightly improved
I do want to note that I tested the text encoder on its own by captioning my dataset, and it was fairly accurate, so the training itself was successful. BUT I do think all the effort was not worth it, since the improvements are overall very minor; only in some very rare cases do you get a big improvement. I might try training the text encoder on a bigger dataset at some point in the future (since this lora's dataset is not big enough for fully finetuning a text encoder, I think :D), but for now I am not gonna bother with qwen and will continue working on my YARI model.
@jorkingtoncityshallwe Interesting to know !
Want to share me the training script ?
I would run a bigger training on a h100 or b200 if you want.
@LDWorksDV you can just use LLaMA-Factory to train the text encoder, just make sure to select the correct model, and make a dataset that's compatible with the required format (everything is explained in the readme of the github repo)
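For reference, LLaMA-Factory expects the dataset as JSON in one of its documented formats (alpaca or sharegpt style). A hypothetical caption-style sample in the alpaca format might look like the following; the exact field names and the dataset registration step are described in the repo's README, and the content here is invented:

```python
import json

# hypothetical caption-style samples in LLaMA-Factory's alpaca JSON format;
# check the repo's README for the exact fields your chosen format expects
samples = [
    {
        "instruction": "Caption the described image in plain language.",
        "input": "",
        "output": "a nude woman standing in a bedroom, photo style",
    },
]

with open("dataset.json", "w") as f:
    json.dump(samples, f, indent=2)
```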
@jorkingtoncityshallwe Ez (y)
Is there a way to get Qwen to generate deepthroat blowjobs and not just standard ones? I can't seem to get around this limitation with prompting alone.
sure, train your own deepthroat lora lmao
@jorkingtoncityshallwe I meant are there any fairly reliable existing methods? I don't want to train it myself, or I would do that.
@clevnumb A diffusion model can only generate what it has already been trained on (and also kind of mix and match the concepts it has learned). If you want it to reliably generate some specific concept like a deepthroat then you have to train it to do that.
@jorkingtoncityshallwe I'm genuinely surprised nobody has yet. There are DT loras for every other model on here, I believe. I'll just wait, thanks.
Does the qwen edit model work?
Yes it does.
Thanks for the Lora!
I wanted to try it out (of course!) and it crashes Draw Things every time.
If you like I'll send the crash report.
Cheers!
there is no need to send me the crash report, as there is nothing I can do; it's up to the developers of Draw Things to support qwen loras. In ComfyUI it works just fine.
Indeed, I found out that it crashes with every Qwen lora I use. Now why did I try your lora first? Question for my psychologist :-)
Cheers!
After using this lora, the human body is somehow severely broken. The face loses its consistency, and there are extra arms and legs. Sometimes it even becomes an amorphous mass of body parts.
In my opinion, this LoRA doesn't seem suitable for generating images with "non-explicit prompts." While it generates well enough with "explicit prompts" even without trigger words, it often generates incorrectly when used with non-explicit prompts.
Output quality is .....
Oh cry me a river dude, make a better one yourself or use something better then. I simply don't have good enough hardware to keep working on Qwen and smartasses like you make it much more unlikely that I will ever touch this model again
@jorkingtoncityshallwe r u a psychopath or something wt* u r mumbling dude. I just said output quality is bad.. and it's just a simple feedback!!!
@deditz111802 first time I see anyone censor "wtf" to "wt*" lmao. Look dude, if you wanted to actually give constructive criticism you wouldn't have worded it that way. "Most of my outputs come out pixelated" is constructive criticism; "Output quality is ....." just comes off as passive-aggressive bs.
@jorkingtoncityshallwe I see, I really apologise if I put it the wrong way, I just wrote it down without thinking much. And I got really angry seeing your reply, thinking who the f is this guy replying out of nowhere; I only just noticed this is your model. Sorry dude, my bad.
..is shit.
@amazingbeauty hey man at least I contribute to the community, this is def not my best work and I admit it.
Perhaps you can start talking shit once you make something yourself, cuz currently you have made 0 models and you've made 1 post in which you sound like a moron, while begging people to make a specific lora for you lmao.
@jorkingtoncityshallwe keep it up man don't take shit from the scrubs lmao
@jorkingtoncityshallwe this is your reply when someone directly said 'shit', but when I simply commented that the output quality is... (a normal feedback comment) your rage suddenly shot up and you gave a rude reply wtf
@deditz111802 "a normal feedback comment" you delusional passive agressive rat, "Output quality is ....." is as passive agressive as it gets, you legit have no clue what constructive criticism is.
Also idk why you're so bothered by how I answered to him, it's not like I was particularly nice to him either, I literally told him that he's a moron.
Anyway feedback noted I should start being more mean to people so I can make you happy
XD
@deditz111802 again, sure, the quality is shit and not realistic; people train models on AI-generated slop, and that mentality just makes the shit even shittier.
Recommended settings? Cfg, steps, sampler, etc?
The metadata and workflow are attached to all the preview photos, so you can see what I used: generally 2.5 CFG, 20+ steps if you have patience, Euler/Simple works fine, and a shift of 3.1.
@jorkingtoncityshallwe can I use this with lightning loras?
Please tell me what this feature does
How can we prevent the female vagina from appearing on the male testicles?
By not voting Democrat.
suggested prompts for p3n1s generation? everything I enter just generates a massive vag hanging off the dude...
Honestly I haven't bothered with qwen and this lora in general for a long time, there are some pictures generated by other people in the gallery with penises so try to copy their prompts I guess. Or try combining this with a penis specific lora.
gives me very weird artefacts and bad overall results on Qwen2512 Text2Image. Why doesn't it work for me :(
this was trained on the first version of qwen; lightning loras/versions can have a bad impact, and if you use some super low quant that could have an impact too. All gens in the preview images are with fp8 qwen (the first version released).
Even when it works though it's not that great as you can see from the preview images, if all you care about is nudity I'd recommend SDXL based models.
hoping for a Z-Image version
Details
Files
qwen_MCNL_v1.0.safetensors