    NSFW MASTER FLUX (LoRA merged with flux1-dev fp16) - v1.0

    This is a merge of the base model flux1-dev (fp16) and the NSFW MASTER FLUX LoRA: https://civarchive.com/models/667086?modelVersionId=746602

    To merge, I used a script from https://github.com/kohya-ss/sd-scripts/, with --ratios 0.8 in the merge command.
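A merge like this folds each LoRA delta into the corresponding base weight, scaled by the ratio. A toy numpy sketch of that per-weight update (illustration only, not the kohya script itself, which iterates over every LoRA module in the safetensors file and handles per-layer alpha):

```python
import numpy as np

# Toy sketch of what merging a LoRA with --ratios 0.8 does to one weight:
# W' = W + ratio * (alpha / rank) * (up @ down)
rank, alpha, ratio = 4, 8.0, 0.8
W = np.ones((16, 16), dtype=np.float32)              # base model weight
down = np.random.randn(rank, 16).astype(np.float32)  # LoRA down projection (A)
up = np.random.randn(16, rank).astype(np.float32)    # LoRA up projection (B)

delta = (up @ down) * (alpha / rank)                 # full-rank LoRA delta
W_merged = W + ratio * delta                         # weight written to the merged file
```

After this, the LoRA file is no longer needed at inference time, which is why the result ships as a full checkpoint.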

    Description

    Merged the base model flux1-dev.safetensors (fp16) with the NSFW MASTER FLUX LoRA: https://civitai.com/models/667086?modelVersionId=746602


    Comments (41)

    psspsspsspssspss · Aug 30, 2024 · 10 reactions

    Any chance for a Q8 gguf version to save on VRAM?

    psspsspsspssspss · Oct 14, 2024

    @tedbiv fp8 is way less precise than a Q8 gguf
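For context on that claim: Q8_0-style GGUF quantization stores a small per-block scale alongside int8 values, so the rounding error is bounded per block, whereas a flat fp8 cast spends its 8 bits on exponent and mantissa for every value independently. A simplified numpy sketch of the blockwise scheme (not the exact GGUF on-disk layout):

```python
import numpy as np

# Simplified Q8_0-style blockwise quantization: each block of 32 values
# gets its own float scale, and the values are rounded to int8.
def quantize_q8_0(x, block=32):
    blocks = x.reshape(-1, block)
    scale = np.abs(blocks).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero on all-zero blocks
    q = np.round(blocks / scale).astype(np.int8)
    return q, scale

def dequantize_q8_0(q, scale):
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)     # stand-in for model weights
q, scale = quantize_q8_0(w)
max_err = np.abs(dequantize_q8_0(q, scale) - w).max()  # bounded by scale / 2 per block
```

The per-block scale adapts to local weight magnitudes, which is consistent with the commenter's point that a Q8 gguf tracks the fp16 weights more closely than a plain fp8 cast of the same file.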

    tedbiv · Oct 14, 2024

    @psspsspsspssspss the script i used only saves fp32, fp16, bf16, or fp8

    tedbiv · Oct 14, 2024

    @psspsspsspssspss let me look into it...

    GodAlMighty · Aug 31, 2024

    Release a UNet-only version. We already have the other files, and not so much space on our hard disks.

    sevenof9247 · Sep 1, 2024 · 1 reaction

    2 images for 22 GB OMG

    brianclark079145 · Sep 2, 2024

    Kudos for the attempt. Hopefully there will be more traction to your efforts in the future. While it is laudable - and somewhat usable with the SD 1.x keyword-prompting approach - flux is standardized on natural-language prompting, and honestly, by doing so with this model you get the same monstrosities of days past. Honestly, I would rather generate using sd, sdxl, or pony, because I get far superior results, as opposed to trying to trick this model into believing it's flux while it operates like a standard sd or sdxl model. The potential to create an uncensored flux model is certainly there though; if anything, you've proven that much.

    tedbiv · Sep 2, 2024 · 2 reactions

    my poor aching computer... :)

    amd 5600g, 32GB dram, rtx 3060 12GB vram. the large model still needs the ae, clip_l and t5xxl. took 'a long time' to initially load. 1st image 768x1344 20 steps took 3:45. 2nd image took 2:35. nice realistic nipples for a flux model. pretty female faces. i'll post some images... so far so good

    update - my gpu is running about 10c cooler while running this model?

    Triplebenthusiast: Are you using a1111 maybe? I had crazy load times with it, and once I switched to Forge UI it got a lot better - almost 1/4 of the time to load models. I have a 4060 ti with 16GB of vram and was having almost 2-minute load times.

    tedbiv · Sep 8, 2024

    @Triplebenthusiast no, i'm using forge. it was the initial model switch/load took a couple of minutes. after the first image, renders were faster. i really like this model and the images it makes.

    tedbiv · Sep 8, 2024

    the problem is the model size scared everyone off... they need to download it and try it.

    tedbiv · Sep 9, 2024

    update: running today, 768x1344 at 20 steps takes 1:39.

    tedbiv · Sep 2, 2024

    must admit... one of the better nsfw flux models i've tried so far. definitely the largest... :)

    GitarooMan · Sep 3, 2024 · 2 reactions

    thank you so much for your effort, but we mere mortals cannot load that into vram

    tedbiv · Sep 4, 2024

    when it's running it takes 24GB dram and 11.3GB vram; between images it uses 31GB dram and 6GB vram

    VI6_D_DARK_KING · Sep 5, 2024

    Say, I intend to start training LoRAs myself soon and wanted to confirm: is this a good one to use as a base model?

    n0valis · Sep 10, 2024

    Can you explain how you merged? Which script did you use, merge_lora.py ?

    tedbiv · Sep 10, 2024

    i tried merge_models.py... had to change the logs to prints. merging flux-dev-fp8 and the nsfw-master lora seemed to run, up to about 10 minutes into saving the file, then errored out with a memory error. i haven't tried merge_lora.py yet. my cmd line was 'python merge_models.py --models flux_dev-fp8.safetensors NSFW_master-lora.safetensors --output nsfw-fp8.safetensors --unet_only'

    n0valis · Sep 10, 2024

    Ah ok. I just found out that in the sd3 branch of kohya's sd-scripts there is a flux_merge_models.py now; maybe that works better. https://github.com/kohya-ss/sd-scripts/tree/sd3/networks

    tedbiv · Sep 10, 2024

    @n0valis thanks, i might give that a try also.

    tedbiv · Sep 10, 2024

    @n0valis i should also try the merge_lora.py. 

    tedbiv · Sep 11, 2024

    it looks like i don't have enough computer to run them ...

    'cpu' / 'cpu': Uses >50GB of RAM, but works on any machine.

    'cuda' / 'cpu': Uses 24GB of VRAM, but requires 30GB of RAM.

    'cpu' / 'cuda': Uses 4GB of VRAM, but requires 50GB of RAM, faster than 'cpu' / 'cpu' or 'cuda' / 'cpu'.

    'cuda' / 'cuda': Uses 30GB of VRAM, but requires 30GB of RAM, faster than 'cpu' / 'cpu' or 'cuda' / 'cpu'.

    i have rtx 3060 w 12GB of vram and only 32GB of dram. must be time to upgrade :(

    A_C_T_soonr · Sep 19, 2024

    @tedbiv The kohya merge script still works really well for merging Flux LoRAs, and especially for merging them into and out of model checkpoints. And those operations would barely draw any resources for anyone who could run this model in the first place. The main headache in this case is having to deal with the weirdly dissimilar and incompatible weight formulations between LoRAs trained via ai-toolkit/diffusers vs. kohya sd-scripts LoRAs. However, there is a nifty enough conversion script between the two formats in the same folder. Sometimes the conversion excludes some training (mainly text-encoder-oriented?), but that actually seems to improve certain LoRAs, potentially neutralizing some of the context-warping side effects. In any case, it's really not clear to me how or why merging full UNet checkpoints would be at all superior for Flux over strategically mixing training in and out at the LoRA level. When a LoRA can amend the transformer attention and feed-forward layers and just about every other component, it might as well be a checkpoint.

    tedbiv · Sep 25, 2024

    @A_C_T_soonr ahhh... that would explain why the merge script works on some loras and bails with 'no blocks to modify' on others. i was able to merge flux-dev-fp16 with another lora and save it as fp8 and it's only ~11GB. i'll check out the conversion script. i really like the nsfw content of this model. if i could recreate it as an fp8 more people would use it...? thanks for the help :)

    tedbiv · Sep 25, 2024

    @A_C_T_soonr yay! that allowed the merge script to run... now i'll see if i created something useful :)

    tedbiv · Sep 25, 2024

    @A_C_T_soonr that worked... thx. testing the image now. 

    jeriff · Sep 14, 2024

    Is this an all-in-one model, or just the diffusion model without text encoders?

    tedbiv · Oct 11, 2024

    it needs the vae and text encoders

    Ext73 · Sep 17, 2024

    for fp16 models like this you need at least an RTX 4090 with 24 GB VRAM... that's obvious - and my RTX 4090 @ 500 W TGP @ 2950 MHz core and 24000 MHz VRAM has it... it can barely handle FLUX.1-dev fp16, but this quality is at 2048x1536 ;)

    tedbiv · Sep 26, 2024

    i run it on an rtx 3060 w/ 12GB vram. takes about 1:15 for an 896x1152 image

    kmdcomp · Sep 24, 2024 · 4 reactions

    Amazing that this is where we are at already after not even 2 full months since initial release. Horny, um, finds a way I guess.

    GitarooMan · Sep 30, 2024 · 8 reactions

    Please consider naming this something else, as it has the exact same name as Shopon_Skp's NSFW Master model. "NSFW Defozo Edition" or something.

    Shopon_Skp · Oct 4, 2024 · 28 reactions

    Bro, you merged my LoRA model even though I didn’t give permission to merge it! And you didn’t just merge the LoRA, but also used my name as well. Please do one thing: either delete the model or change the name

    Imc00l · Oct 4, 2024

    Hating is wild

    jap39uni · Oct 6, 2024

    on the shoulders of giants

    abozaretfilo91297 · Oct 11, 2024

    thanks for the model, but dang, my poor trusty 3090 is suffering and struggling to allocate memory for it

    irreverend512 · Aug 24, 2025

    You can convert it to GGUF and quantize it yourself I believe

    s1710774108472 · May 17, 2025

    I would like to use this safetensors file in a diffusers pipeline instead of "black-forest-labs/FLUX.1-dev".

    Is there any method for connecting it to the pipe?

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        vae=vae,
        text_encoder=text_encoder,
        tokenizer=tokenizer,
        text_encoder_2=text_encoder_2,
        tokenizer_2=tokenizer_2,
        torch_dtype=dtype,
        scheduler=scheduler,
        cache_dir=os.environ["HF_HOME"],
    )


    MrSmith2025 · Jul 1, 2025

    Would be great to get an fp8 version of that.

    Checkpoint
    Flux.1 D

    Details

    Downloads
    10,059
    Platform
    CivitAI
    Platform Status
    Available
    Created
    8/30/2024
    Updated
    5/13/2026
    Deleted
    -

    Available On (2 platforms)

    Same model published on other platforms. May have additional downloads or version variants.