CivArchive
    MCNL (Multi Concept NSFW Lora) [Qwen Image] - v1.0
    NSFW

    MCNL is meant to be a convenient way to uncensor Qwen, with many concepts included. Please keep in mind that LoRAs trained on a single concept will generally have better quality than a multi-purpose one like this.

    --UPDATE--

    V1.0 is out. It adds a lot of new concepts and improves overall quality while being four times smaller in size!

    Trigger words: nsfw, cum_on_face, blowjob, cowgirlout, creamp1e, penis, l1ck, missionary, nipples, reversecowgirlpov, vagina

    Some context for the trigger words: "cowgirlout" is the cowgirl position viewed from an outside perspective, "l1ck" is a woman licking a penis, and "reversecowgirlpov" is the reverse cowgirl position from the man's POV. Everything else is self-explanatory.

    Enjoy, gooners

    Comments (80)

    larflarfAug 12, 2025· 8 reactions
    CivitAI

    Another masterpiece from Qwen...

    It works well with the Lightning LoRA at 4-8 steps.

    MinipaAug 12, 2025· 2 reactions
    CivitAI

    How many images were in the training dataset?

    jorkingtoncityshallwe
    Author
    Aug 12, 2025· 3 reactions

    The dataset for all concepts combined consisted of around 600 images

    pursuit_of_beautyAug 13, 2025

    Take 600 and divide it by the number of concepts.

    armstrongli2000Aug 13, 2025· 2 reactions
    CivitAI

    I've tested multiple times and doggystyle does not work. Can you provide an example prompt?

    jorkingtoncityshallwe
    Author
    Aug 13, 2025· 3 reactions

    Doggystyle was the one concept I forgot to test, and you're kind of right. I just tested it, and it is very hard to get a doggystyle image (the few times it generated successfully, it didn't look very good either), and it seems to be getting confused with the reversecowgirlpov concept (it keeps making the man lie down instead of standing up or staying on his knees). The issue seems to be that the two concepts are visually similar, and also that I don't have many images for the doggystyle concept.

    I will just remove it from the list of concepts/triggers, in the future I might make a v2 with more balanced data that would hopefully fix this issue.

    fox23vang226Aug 14, 2025· 5 reactions
    CivitAI

    It does nipples fine, but vagina is not there yet; they come out very deformed. Reminds me of the vaginas from when the first nude LoRAs came out for Flux and SDXL. Still needs time to cook.

    jorkingtoncityshallwe
    Author
    Aug 14, 2025· 8 reactions

    "still needs time to cook more" the lora has 10 concepts... if I make a lora for only one concept (e.g. vagina), it will render them well almost always. The purpose of this lora wasn't maximum quality, the focus was quantity, it was a fun experiment for me, and a fairly convinient way to do a lot of different nsfw stuff decently. Nipples looks fine because they're fairly simple, while vaginas have lots of detail, folds etc, which makes them hard to incorporate in a lora with so many concepts.

    The truth is, though, Qwen requires pretty heavy hardware to train, and each LoRA is much more expensive to make than for other models I've trained. That's why I decided to do one LoRA with many concepts, since making a separate LoRA for each one would have cost me a lot more.

    Mind you, I do this completely for free; I don't ask for donations, and I don't have a Patreon or any donation page set up. If you want super good vagina rendering, or super good cum rendering, or whatever else, this is simply not the LoRA for you, since the focus is on generalization, not single-concept expertise.

    brnlittokhoes3110Aug 15, 2025

    jorkingtoncityshallwe Can it be trained on 24GB? Even if slower?

    jorkingtoncityshallwe
    Author
    Aug 15, 2025

    brnlittokhoes3110 Ostris just released an update that lets you train on a 24GB card, but the quality is quite a bit worse because his method leverages 3-bit training with a specialized LoRA to "restore" the quality. That restoration LoRA isn't magic, and of course it isn't perfect, so LoRAs made with this method end up with more artifacts overall and probably won't learn small details. Here's his video on it: https://www.youtube.com/watch?v=MUint0drzPk

    fox23vang226Aug 16, 2025

    jorkingtoncityshallwe I'm in the middle of a training run on a vagina LoRA with that method as we speak; here is the progress. I think with more time and steps you can get good results. I think I bungled it by setting the steps too low, and will likely have to do a fresh run with at least 10k steps. The settings seem good; nothing is overcooking, it's just slow cooking, so with more steps I think it will be good. https://civitai.com/posts/20990716

    jorkingtoncityshallwe
    Author
    Aug 16, 2025

    fox23vang226 Yes, if you're training only one concept it can work. Did you even read my previous reply? My LoRA contains 10 concepts and has been trained for almost 31k steps; training it for longer wouldn't help much. In my case the number of steps is not the issue.

    fox23vang226Aug 16, 2025

    jorkingtoncityshallwe Yes, I did read it. I got tunnel vision because you typically don't train multiple concepts in one LoRA due to the extreme complexity; the difference in difficulty between a single-concept and a multi-concept LoRA is a big one. It's interesting that even with a massive 31k steps, the details still aren't coming through for a multi-concept LoRA. In my vagina training run, at just 1800 steps I'm already seeing small but noticeable improvements every 300 steps. Have you considered trying a different learning rate, or adjusting network_alpha and network_dim, to see if that helps with detail retention across all the concepts?

    jorkingtoncityshallwe
    Author
    Aug 16, 2025· 1 reaction

    fox23vang226 v0.1 had a higher network dim/alpha (which is why it was also significantly bigger in size), and it didn't perform much better; if anything, v1.0, which is about four times smaller, seems to be better.

    A higher learning rate would just start cooking either all concepts or specific concepts (depending on how high it was set). The learning rate v1.0 was trained with was close to the maximum possible without cooking any of the concepts, though perhaps I could have gone a bit higher to save on training time.

    Honestly, if I make a v2.0 I would try to train it without quantization (kind of pricey, because I would probably need 80GB of VRAM, perhaps more, not sure), I would probably try a dim/alpha somewhere between v1.0's and v0.1's, and I would try to make the dataset a bit more balanced.

    Anyway good luck with your lora, I hope it turns out well

    redsnailAug 14, 2025
    CivitAI

    any dildo insertions?

    jorkingtoncityshallwe
    Author
    Aug 14, 2025· 1 reaction

    this lora has not been trained on any dildo related content.

    ganthery1761495Aug 15, 2025· 20 reactions
    CivitAI

    you are my god!!! you are the best genius of the 21st century

    TheWorldIsBlueAug 16, 2025
    CivitAI

    Great for general NSFW, but any tips on avoiding plastic skin? I get more realistic-looking skin with the stock Qwen model.

    Cedar8043Aug 17, 2025· 4 reactions

    You could try doing a second pass with Wan 2.2 or Flux Krea, which handle skin texture better.

    Eidolon3DAug 21, 2025· 16 reactions
    CivitAI

    Qwen has succeeded in returning to the 'SDXL' era

    lopezd20Aug 22, 2025· 8 reactions
    CivitAI

    Will there be a version for Qwen Image Edit? It doesn't work with it at the moment.

    jorkingtoncityshallwe
    Author
    Aug 22, 2025· 3 reactions

    Currently Qwen Image Edit is not supported in any training tool, to my knowledge. Also, Qwen in general costs too much to train properly. I could consider it once support for the model is available, but I don't promise anything.

    lopezd20Aug 22, 2025

    @jorkingtoncityshallwe Ok, I understand, thanks for the reply!

    btsfAug 23, 2025· 1 reaction

    @jorkingtoncityshallwe I haven't tried this myself yet, but apparently Qwen Image Edit LoRA training is here: https://www.reddit.com/r/StableDiffusion/comments/1mvph52/qwenimageedit_lora_training_is_here_we_just/

    154274174199Aug 24, 2025

    I tried using it to replace the original 4-step LoRA provided by the official workflow, and it works!

    SkuuurtAug 24, 2025

    @jorkingtoncityshallwe Qwen Image Edit is supported by Ai-Toolkit.

    jorkingtoncityshallwe
    Author
    Aug 24, 2025

    @Skuuurt I know, it wasn't supported when I first wrote my comment :D

    leonvizAug 23, 2025
    CivitAI

    Hi, I tried using your LoRA but the outputs were always very bad quality, yet when I don't use it the output is ok. I was using Qwen Image Q4_1 GGUF; not sure if that matters.

    jorkingtoncityshallwe
    Author
    Aug 23, 2025

    the replies in this comment section might help since they were experiencing the same issue and managed to solve it https://civitai.com/models/1851673/mcnl-multi-concept-nsfw-lora-qwen-image?modelVersionId=2105899&dialog=commentThread&commentId=898889

    SkuuurtAug 24, 2025
    CivitAI

    When using one or two other LoRAs together with this one, image degradation and a yellow filter come fast. Any advice on how to avoid this?

    jorkingtoncityshallwe
    Author
    Aug 25, 2025

    I have been able to use this LoRA alongside my style LoRAs without issue. It all depends on the LoRAs you're using; just play around with the weights until you get a good result. Also, if you're using a heavily quantized version of the model and the 4-8 step LoRAs, you're more likely to have issues. FP8 with ~20 steps will be consistently better.

    SkuuurtAug 26, 2025

    @jorkingtoncityshallwe It was indeed the 8-step LoRA. Thanks a lot for your guidance.

    LDWorksDVSep 1, 2025
    CivitAI

    Question:
    I see you mention that the trigger words are working.

    How and why would they? Are you sure you're not just "overtraining" the unet?

    I ask because trainers like AI-Toolkit normally only train the unet, not the CLIP. (Or what are you training with?)

    That's why in ComfyUI, even if you set the LoRA Loader's clip_strength to 1000, nothing changes.

    That means you never touch the CLIP, which means you can't train new trigger words.

    Have you ever tried training the text_encoder? The Qwen-Image devs shared the training code.

    jorkingtoncityshallwe
    Author
    Sep 1, 2025

    First of all, you can train the text encoder in AI-Toolkit; you just need to enable the option. And yes, this LoRA had both the unet and the text encoder trained.

    As for "you never touch the CLIP, which means you can't train new trigger words": this is completely false. Even if you train the unet only, your trigger word will still work; training the text encoder as well of course helps the model understand what it's doing better. (This applies to single-concept training, by the way; for multiple concepts I believe training the text encoder is pretty crucial, but I could be wrong.) I won't go into detail on why trigger words work without training the text encoder, because this reply would get too long, but literally just train a LoRA yourself on a single concept with a trigger word, then use the LoRA with and without the trigger word; there will be a clear difference (as long as you didn't overtrain it).

    LDWorksDVSep 1, 2025

    @jorkingtoncityshallwe Actually, the AI-Toolkit dev mentioned somewhere that you can't train the encoder :`D The settings are there, yes, but they don't do anything.

    Please load your LoRA and push clip_strength to 1000; you will see it won't change anything. Set the model_strength to 1000 and you get an overbaked mess.

    On SDXL the clip_strength did something, because you trained the text_encoder too.

    Yes, actually I know what you mean with the trigger words. The text encoder is trained frozen, so no new words can be learned. (That's how I understand it.)

    So if, for example, you try to train "a woman getting fucked in her ass" as "a woman getting 23fgd", it won't work. In SDXL, "23fgd" would come to mean "getting fucked in her ass", because the CLIP would learn it.

    In Qwen and Flux that will never happen. Never ever. I've tried it 100 times recently. And for models like Flux you always have to find trigger words that already work.

    Have you ever tried writing just your "triggers", without combining them with "woman, naked" and so on? I would guess they won't make a big difference written alone.

    LDWorksDVSep 1, 2025

    I will soon try training the CLIP standalone on the same dataset.

    Have you applied "Differential Output Preservation" to the training? Supposedly that is a way to train trigger words in AI-Toolkit again.

    jorkingtoncityshallwe
    Author
    Sep 1, 2025

    @LDWorksDV You know what, you might be right. I will do some testing; if AI-Toolkit's text encoder training doesn't work, then a custom script would probably be best, and it shouldn't be that hard to do (although having a pretty GUI is always nice, haha). As for my SDXL-based stuff, I used OneTrainer, which I personally like quite a bit and which seems to at least have working text encoder training :D.

    Honestly, though, I don't think I'm gonna bother with Qwen again, because that thing takes insane hardware to train.

    LDWorksDVSep 1, 2025

    @jorkingtoncityshallwe Yeah, I know what you mean. On RunPod, an A100 uses 37GB of VRAM with default settings, so I guess you can train there with 40GB. An L40 is $0.93/hr, and setting everything up takes 10 minutes. (That's how I train.)

    Not cheap, but it's "okay" xD

    So yeah, I will try it soon with a custom script made by GPT.

    This would theoretically mean that a nudity LoRA like Lustify could be possible with Qwen-Image (with enough training). Like: a 2000-image finetune of Qwen-Image plus a 2000-image finetune of Qwen-VL-2.5, so that the text encoder even learns what the triggers are. Would be amazing.

    jorkingtoncityshallwe
    Author
    Sep 1, 2025

    @LDWorksDV Yeah, I just tested it with the same prompt/seed and different clip strengths, and the resulting image was exactly the same. So you're right: AI-Toolkit's text encoder training doesn't work properly. Honestly, it's kind of impressive that this LoRA works at all, considering the text encoder is essentially clueless about all the NSFW stuff lol.

    I have not used "Differential Output Preservation", btw, but in theory that should just be a way to preserve existing concepts when introducing new ones (so it basically prevents overfitting and "forgetting" of concepts).

    LDWorksDVSep 1, 2025

    @jorkingtoncityshallwe Yeah, these problems were actually already there with Flux, and nobody believed me haha.

    So custom triggers never worked; the model "somehow" learns what it sees, and you have to find the right trigger.

    Yeah, you're right, then it doesn't make sense. I will try to train the CLIP soon. Would be huge.

    It would be interesting if it's possible later to merge the Qwen-VL-2.5 LoRA with the Qwen-Image LoRA, to get the text encoder layers into the image LoRA, so we can use them together in one LoRA loader again.

    LDWorksDVSep 1, 2025· 1 reaction

    I thought about what I said, and I guess you have to do it like this:
    1. Train a LoRA on Qwen-VL-2.5 / or / t5xxl.
    2. Merge it into the text encoder.
    3. Train a LoRA on the unet with the new merged text encoder. (Now you can use the known new trigger words.)
    4. In the end, use the new text encoder in Comfy plus the unet model with the LoRA.

    This would make the most sense to me.
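    For what it's worth, the merge in step 2 is conceptually just adding the scaled low-rank product back into each weight matrix: W' = W + (alpha/rank) * B @ A. Here is a minimal, dependency-free sketch of that update (toy shapes and values for illustration, not Qwen's actual tensors):

```python
# Dependency-free sketch of a LoRA merge: W' = W + (alpha / rank) * (B @ A).
# Shapes: W is (out, in), B is (out, rank), A is (rank, in). Values are illustrative.

def matmul(X, Y):
    """Multiply two matrices given as lists of lists."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def merge_lora(W, A, B, alpha, rank):
    """Return W + (alpha / rank) * (B @ A), leaving W untouched."""
    scale = alpha / rank
    delta = matmul(B, A)  # low-rank update with the same shape as W
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Tiny 2x2 base weight with a rank-1 LoRA.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # (out=2, rank=1)
A = [[0.5, 0.5]]     # (rank=1, in=2)
print(merge_lora(W, A, B, alpha=2.0, rank=1))  # [[2.0, 1.0], [2.0, 3.0]]
```

    Real trainers apply this per target layer over the model's state dict, but the arithmetic is the same.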

    jorkingtoncityshallwe
    Author
    Sep 6, 2025

    @LDWorksDV so just a little update if you're curious, I managed to train the text encoder on the same dataset as the lora, the results were the following:
    - in some cases completely mangled/bad looking results were completely fixed with the nsfw text encoder (fairly rare)
    - in some other cases it would make the output a bit worse (fairly rare as well)
    - the overall nsfw "understanding" and prompt adhesion seemed to be slightly improved

    I do want to note that I tested the text encoder on it's own for captioning my dataset and it was fairly accurate so the training its self was successful. BUT I do think that all the effort was not worth it, since the improvements are overall very minor, only in some very rare cases you get a big improvement. I might try training the text encoder on a bigger dataset at some point in the future (since this lora's dataset is not big enough for fully finetuning a text encoder I think :D), but for now I am not gonna bother with qwen and will continue working on my YARI model.

    LDWorksDVSep 6, 2025

    @jorkingtoncityshallwe Interesting to know !
    Want to share me the training script ?
    I would run a bigger training on a h100 or b200 if you want.

    jorkingtoncityshallwe
    Author
    Sep 6, 2025

    @LDWorksDV You can just use LLaMA-Factory to train the text encoder; just make sure to select the correct model, and build a dataset that's compatible with the required format (everything is explained in the readme of the GitHub repo).
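    For context, LLaMA-Factory runs are driven by a YAML config. A minimal LoRA fine-tuning sketch might look like the following; the model path, dataset name, and hyperparameters are illustrative assumptions, not the settings used for this LoRA, so check the repo's examples/ directory for the current keys:

```yaml
# Hypothetical LLaMA-Factory LoRA config (values are assumptions, not the author's settings)
model_name_or_path: Qwen/Qwen2.5-VL-7B-Instruct   # assumed text-encoder model
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
dataset: my_captions              # must be registered in data/dataset_info.json
template: qwen2_vl
output_dir: saves/qwen-vl-lora
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
```

    A config like this is typically launched with something along the lines of `llamafactory-cli train config.yaml`.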

    LDWorksDVSep 7, 2025

    @jorkingtoncityshallwe Ez (y)

    clevnumbSep 1, 2025
    CivitAI

    Is there a way to get Qwen to generate deepthroat blowjobs and not just standard ones? I can't seem to get around this limitation with prompting alone.

    jorkingtoncityshallwe
    Author
    Sep 6, 2025· 2 reactions

    sure, train your own deepthroat lora lmao

    clevnumbOct 4, 2025

    @jorkingtoncityshallwe I meant are there any fairly reliable existing methods? I don't want to train it myself, or I would do that.

    jorkingtoncityshallwe
    Author
    Oct 4, 2025

    @clevnumb A diffusion model can only generate what it has already been trained on (and also kind of mix and match the concepts it has learned). If you want it to reliably generate some specific concept like a deepthroat then you have to train it to do that.

    clevnumbOct 4, 2025

    @jorkingtoncityshallwe I'm genuinely surprised nobody has yet. There are DT LoRAs for every other model on here, I believe. I'll just wait, thanks.

    nernzeng224Sep 3, 2025· 9 reactions
    CivitAI

    Does the qwen edit model work?

    BaumamnOct 2, 2025· 3 reactions

    Yes it does.

    Spock_on_SlashSep 5, 2025
    CivitAI

    Thanks for the Lora!
    I wanted to try it out (of course!) and it crashes Draw Things every time.
    If you like I'll send the crash report.
    Cheers!

    jorkingtoncityshallwe
    Author
    Sep 5, 2025

    There is no need to send me the crash report, as there is nothing I can do; it's up to the developers of Draw Things to support Qwen LoRAs. In ComfyUI it works just fine.

    Spock_on_SlashSep 6, 2025

    Indeed, I found out that it crashes with every Qwen lora I use. Now why did I try your lora first? Question for my psychologist :-)
    Cheers!

    a354286237512Oct 8, 2025· 1 reaction
    CivitAI

    After using this LoRA, bodies somehow come out severely broken: faces lose consistency, there are extra arms and legs, and sometimes it even becomes a mangled mass of body parts.

    youprobablyknowOct 15, 2025

    In my opinion, this LoRA doesn't seem suitable for generating images with non-explicit prompts. While it generates well enough with explicit prompts even without trigger words, it often generates incorrectly when used with non-explicit prompts.

    deditz111802Oct 16, 2025· 9 reactions
    CivitAI

    Output quality is .....

    jorkingtoncityshallwe
    Author
    Oct 16, 2025· 17 reactions

    Oh, cry me a river, dude. Make a better one yourself, or use something better. I simply don't have good enough hardware to keep working on Qwen, and smartasses like you make it much less likely that I will ever touch this model again.

    deditz111802Oct 16, 2025· 4 reactions

    @jorkingtoncityshallwe Are you a psychopath or something? Wt* are you mumbling, dude? I just said the output quality is bad. It's just simple feedback!!!

    jorkingtoncityshallwe
    Author
    Oct 16, 2025· 10 reactions

    @deditz111802 First time I've seen anyone censor "wtf" to "wt*" lmao. Look, dude, if you wanted to give actual constructive criticism you wouldn't have worded it that way. "Most of my outputs come out pixelated" is constructive criticism; "Output quality is ....." just comes off as passive-aggressive bs.

    deditz111802Oct 16, 2025· 2 reactions

    @jorkingtoncityshallwe I see. I really apologize if I put it the wrong way; I just wrote it down without thinking much. And I got really angry seeing your reply, wondering who the f this guy was, replying out of nowhere; only then did I notice this is your model. Sorry, dude, my bad.

    amazingbeautyNov 13, 2025· 3 reactions

    ..is shit.

    jorkingtoncityshallwe
    Author
    Nov 13, 2025· 8 reactions

    @amazingbeauty Hey man, at least I contribute to the community; this is definitely not my best work, and I admit it.

    Perhaps you can start talking shit once you make something yourself, cuz currently you have made 0 models and 1 post, in which you sound like a moron while begging people to make a specific LoRA for you lmao.

    crappypornthrowawayNov 15, 2025· 6 reactions

    @jorkingtoncityshallwe keep it up man don't take shit from the scrubs lmao

    deditz111802Dec 5, 2025

    @jorkingtoncityshallwe This is your reply when someone directly said "shit", but when I simply commented that the output quality is... (a normal feedback comment), your rage suddenly shot up and you gave a rude reply, wtf.

    jorkingtoncityshallwe
    Author
    Dec 5, 2025· 3 reactions

    @deditz111802 "a normal feedback comment" you delusional passive agressive rat, "Output quality is ....." is as passive agressive as it gets, you legit have no clue what constructive criticism is.
    Also idk why you're so bothered by how I answered to him, it's not like I was particularly nice to him either, I literally told him that he's a moron.
    Anyway feedback noted I should start being more mean to people so I can make you happy

    axel980Feb 4, 2026

    XD

    amazingbeautyApr 20, 2026

    @deditz111802 Again, the quality is shit, sure, and not realistic. People train models on AI shit... that mentality just keeps making the shit even shittier.

    bullshizOct 19, 2025· 3 reactions
    CivitAI

    Recommended settings? Cfg, steps, sampler, etc?

    jorkingtoncityshallwe
    Author
    Oct 19, 2025

    The metadata and workflow are attached to all the preview photos, so you can see what I used. Generally: CFG 2.5, 20+ steps if you have patience, Euler/Simple works fine, and a shift of 3.1.

    osiasytacc222Feb 4, 2026

    @jorkingtoncityshallwe can I use this with lightning loras?

    xishui8873Oct 28, 2025
    CivitAI

    Please tell me what this does.

    Peter2053510Nov 7, 2025· 27 reactions
    CivitAI

    How can we prevent the female vagina from appearing on the male testicles?

    demeskeelos1956Apr 26, 2026

    By not voting Democrat.

    jimbotkDec 17, 2025
    CivitAI

    suggested prompts for p3n1s generation? everything I enter just generates a massive vag hanging off the dude...

    jorkingtoncityshallwe
    Author
    Dec 18, 2025

    Honestly, I haven't bothered with Qwen or this LoRA in general for a long time. There are some pictures with penises in the gallery generated by other people, so try copying their prompts, I guess. Or try combining this with a penis-specific LoRA.

    Phac123Jan 14, 2026
    CivitAI

    Gives me very weird artefacts and bad overall results on Qwen2512 Text2Image. Why doesn't it work for me? :(

    jorkingtoncityshallwe
    Author
    Jan 15, 2026

    This was trained on the first version of Qwen; lightning LoRAs/versions can have a bad impact, and using a very low quant could too. All the gens you see in the preview images are with FP8 Qwen (the first version released).
    Even when it works, though, it's not that great, as you can see from the preview images; if all you care about is nudity, I'd recommend SDXL-based models.

    banlg22793Jan 29, 2026· 1 reaction
    CivitAI

    hoping for a zimage version

    Details

    Downloads
    36,723
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/12/2025
    Updated
    5/14/2026
    Deleted
    -
    Trigger Words:
    cum_on_face
    nsfw
    blowjob
    cowgirlout
    creamp1e
    penis
    l1ck
    missionary
    nipples
    reversecowgirlpov
    vagina

    Files

    qwen_MCNL_v1.0.safetensors

    Available On (2 platforms)

    Same model published on other platforms. May have additional downloads or version variants.