GGUF and other optimized versions can be found on Hugging Face: https://huggingface.co/chapel/hyphoria_qwen_v1.0
You can find easy-to-download versions of my workflow here (includes versions with and without custom nodes): https://civarchive.com/articles/22862
This started as an experiment in applying the strategies I used with Hyphoria and Hyphoria Real on Illustrious, and I ended up with something quite interesting: much more than the sum of its parts.
My original goal was to see if I could mix together a model that could look realistic and have baked-in NSFW capabilities using models available today. No single NSFW LoRA really provided everything; each had tradeoffs, and the amount of realism varied quite a bit between trained checkpoints and realism LoRAs.
My merging process somehow produced something more realistic than I expected or planned, so I refined that to improve its promptable capabilities as well as what NSFW it could support out of the box.
I merged Jib Mix Qwen with White Qwen and then too many LoRAs on top. My merging process lets me do this cleanly; merging this many LoRAs normally would give you broken output. It does come with tradeoffs in that not everything is always merged how you expect, so it required extensive testing from me and some community members to get a feel for something that worked well.
It was obvious I needed to do some fine-tuning myself, training on specific objectives to refine areas where the model was annoying to use, specifically around nudity and nipples/genitals rendering through clothing. This can still happen, but it is much less frequent and usually tied to something specific you are prompting. A prompt guide is soon to follow.
Settings I use:
Base Generation
Sampler: res_3s
Scheduler: bong_tangent
Steps: varies (8-12)
CFG: always 1
Lightning 8-step strength: 0.5-0.55
Hires (Latent Upscale)
Sampler: res_2s
Scheduler: bong_tangent
Steps: varies (6-12)
CFG: always 1
Lightning 8-step strength: 0.4-0.5
I use the Lightning v2 8-step LoRA. You can use the 4-step version or no Lightning LoRA at all, but the right strength and step values depend heavily on what you want.
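For anyone scripting generations, the two-pass settings above can be captured as plain Python data. A minimal sketch: the dict layout, field names, and the validate helper are my own illustrative assumptions, not an actual ComfyUI API payload, and the res_3s/res_2s samplers and bong_tangent scheduler come from custom node packs (such as RES4LYF), so they assume that pack is installed.

```python
# Sketch of the two-pass settings above as plain Python data.
# Field names are illustrative, not a real ComfyUI API schema.

BASE_PASS = {
    "sampler": "res_3s",
    "scheduler": "bong_tangent",
    "steps": 10,                      # varies, 8-12
    "cfg": 1.0,                       # always 1
    "lightning_lora_strength": 0.5,   # 0.5-0.55
}

HIRES_PASS = {                        # latent upscale pass
    "sampler": "res_2s",
    "scheduler": "bong_tangent",
    "steps": 8,                       # varies, 6-12
    "cfg": 1.0,
    "lightning_lora_strength": 0.45,  # 0.4-0.5
}

def validate(settings):
    """Sanity-check a pass against the ranges quoted above."""
    assert settings["cfg"] == 1.0, "negatives are ignored at CFG 1"
    assert 0.4 <= settings["lightning_lora_strength"] <= 0.55
    assert 6 <= settings["steps"] <= 12
    return True

print(validate(BASE_PASS), validate(HIRES_PASS))  # → True True
```

The point of encoding the ranges is that both passes share CFG 1 and the bong_tangent scheduler; only the sampler order, step count, and LoRA strength differ between base generation and hires.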
All my images should have workflows embedded. I will be publishing a more refined version ASAP.
Description
Experimental merge with some focused training on top to reinforce and fix some issues with all the things merged into it.
FAQ
Comments (123)
That model is fire!!!!!
i love you daddy, hope you post on Tensor too
any plans on gguf ?
I will be looking into it tomorrow.
Sorry for the delay, Q8 is uploaded (note I couldn't validate that it 100% works, as it OOMs on my local machine, but the Q6 from the same process works as expected, so I assume the Q8 is fine).
I couldn't upload both here, I am putting them up on HF as well.
plz man add gguf
I'll be looking into it tomorrow.
Sorry for the delay, Q8 is uploaded (note I couldn't validate that it 100% works, as it OOMs on my local machine, but the Q6 from the same process works as expected, so I assume the Q8 is fine).
I couldn't upload both here, I am putting them up on HF as well.
@ecaj love u
Haha, I'm here!
Using this model without any LoRA, I can generate high-quality images from 2K to 4K resolution. It's amazing.
PRUNED is 19 GB 💀
I'll be putting up GGUF quantized and a Nunchaku option which is even smaller in general.
@ecaj Nah it's fine it wasn't supposed to be a complaint mb 😅.
for a file this size, could you add a seed-to-seed comparison with a default Qwen version, to see what you have created? Thanks in advance
Good idea, I planned to do a comparison of some sort just been busy.
Congratulations on this excellent work and model! It's very impressive! I strongly encourage you to continue. I've only encountered one small annoying detail at the moment: its tendency to add a signature despite my negative prompts ("text, watermark, signature, name, label, letters, words, logo, writing").
Hmm, if you are using cfg 1, the negative is ignored and not used.
You could look into PAG or other options to allow negatives to work (not 100% sure here need to explore more).
As for the tendency, what types of prompts are you running? Maybe I can identify what is triggering it. None of my training had signatures, but there's a good deal of training in the merged models that I have no insight into.
@ecaj Regarding the signature issue, my prompt is the culprit. (I encountered this problem while using it with other models too.) Regards
@DrakulValmont666 I have found that depending on the resolution you are using it may also have more of a tendency. Something I want to look into for future versions.
Amazing work, keep it up. You've improved JIB without slowing it down. For future training, if there is any, I'd recommend adding a bit more data for deepthroat, flat chests, nipple shapes and sizes, and more natural breast shapes (like sagging), because it defaults hard to a round implant look.
I have already around 10k images covering just NSFW concepts and body shape/sizes/types along with ethnicity and variations on all fronts. Plan to caption things to a detail that you could describe very specific anatomy and ideally get it. Will take some time to finish collecting, captioning, and then training but excited to get there.
@ecaj cheers, thanks for making qwen actually usable finally. (Outside of aio 2509 edit)
@TheNecr0mancer If I have my way, I'll make the next one a no compromises t2i and edit model. Cause I want to just have one model that does all the things I want to do, has been my dream for a while.
Is there a version based on Qwen Edit 2509? I'd love to have one since it's more powerful for editing. Thanks!
Not currently, but I plan to explore that next.
Holy Shit! This looks amazing! Is there any chance to publish in Q8?
As soon as possible, likely tonight or tomorrow. Someone from the community helped me out, just haven't had time to process it yet.
@ecaj Thank you! Much appreciate your hard work.
Q8 is now live, sadly I couldn't verify it myself locally as my comfy setup goes OOM with it, but the Q6 I made worked as expected so I assume it works the same.
@ecaj Had already tested your workflow on my local setup (16GB VRAM + 64GB RAM).
The results are almost identical to BF16. Q8 is much more friendly to a 16GB VRAM environment — no OOM so far. Thanks again for your efforts. Much appreciated.
THATS a fantastic checkpoint!
This is a masterpiece. I want to buy you coffee!
One or more of the models and LoRAs used to create the fp8 checkpoint seem to be based on incorrectly scaled fp8 models, like ComfyUI's official "qwen_image_fp8_e4m3fn.safetensors" from https://huggingface.co/Comfy-Org/Qwen-Image_ComfyUI. Or you incorrectly produced the fp8 version by just directly downcasting it without any rescaling. This makes your model incompatible with LoRAs trained on correctly scaled models, causing really bad artifacts. https://github.com/ModelTC/Qwen-Image-Lightning?tab=readme-ov-file#-using-lightning-loras-with-fp8-models
I have tested this a few times now, again after you posted and I do not see the grid pattern with the fp8 model using the lightning lora. I use the v2 8 step lora, so maybe it shows on the 4 step but it doesn't on the 8 step. This is the one I am using: https://huggingface.co/lightx2v/Qwen-Image-Lightning/blob/main/Qwen-Image-Lightning-8steps-V2.0.safetensors
I'll be posting some comparison images but I'd be curious to see anything you've seen yourself.
@ecaj The issue is present when I use an int8-trained LoRA on your model. It's really bad on the fp8 version of your model, and better but still not great on the GGUF Q8 version.
@progamergov Well afaik that isn't necessarily an issue with my model, but the lora. I run bf16 weights with fp8 runtime quantizing, I only got the grid with certain loras. But I've not seen that issue with the lightning lora. What lora do you use?
@ecaj So I can't release the model I've been working on yet, but I found this Lora that seems to be trained on int8: https://civitai.com/models/1907905, which should in theory have the same issue as my model does with your checkpoint.
Int8/GGUF Q8 are basically the highest-precision 8-bit quants you can get, and thus tend to produce the best results.
@ecaj Based on my testing and training experiments, the issue can be resolved by continuing training on an int8 or the bf16 version of the model for a short period of time. The research team I linked though did some sort of distillation process: "These weights were generated by distilling the qwen_image_fp8_e4m3fn.safetensors model using bf16 guidance, thereby mitigating the artifact issue."
@progamergov Good to hear you were able to resolve it for yourself. Distilling is not the same as training on a model, so the effects will be different. But you'd have to not use lightning loras to make sure those aren't affecting your lora for instance.
@ecaj I wasn't able to resolve the issue of using my lora with your model. Just that my lora was originally trained on a lower nf4 precision version of the base Qwen model that caused the issues. I resolved it by moving training to the higher int8 precision. I wasn't using my Lora with the lightning lora, I just linked to it as an example of others who experienced the issue and resolved it.
@progamergov Well, you definitely shouldn't be training against nf4. Have you tried training against fp8 or even bf16? Any lora not trained on source weights is going to have trouble with other quantized versions I'd guess, at least without some specific tuning to fix issues related to being trained on a quite different latent space.
@ecaj Yes, I switched to int8, which is tied with GGUF Q8 for the highest 8-bit precision. NF4 actually didn't have the problem of the underlying model's precision causing artifacts with other LoRAs, but it did have a base level of artifacts present, which could be almost nonexistent depending on the prompt.
@ecaj The model I was talking about has been released now: https://civitai.com/models/2209835/qwen-360-diffusion
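For readers following the quantization back-and-forth above: the artifact issue hinges on scaled versus directly cast low-precision weights. As a minimal sketch of that distinction (simulated with int8 in NumPy, since fp8 e4m3 isn't a NumPy dtype; the function names are my own), a per-tensor scale maps the weight range onto the representable range before rounding, while a naive cast collapses small weights to zero:

```python
import numpy as np

def quantize_scaled(w, qmax=127):
    """Per-tensor scaled quantization: rescale so the largest
    weight lands at the edge of the representable range."""
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
# Typical neural-net weight magnitudes are far below 1.
w = rng.normal(0, 0.02, size=4096).astype(np.float32)

q, s = quantize_scaled(w)
err_scaled = np.abs(dequantize(q, s) - w).max()

# Naive "direct cast": round with no rescaling; every small
# weight rounds to zero, the failure mode described above.
err_naive = np.abs(np.clip(np.round(w), -127, 127).astype(np.float32) - w).max()

print(err_scaled < err_naive)  # → True
```

The same logic is why a LoRA trained against weights quantized one way can misbehave when applied to weights quantized the other way: the effective weight values it learned to offset are different.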
Added Q8 GGUF here to civit, Q6 is also available on HF.
GGUF & Any other optimized versions can be found on Hugging Face: https://huggingface.co/chapel/hyphoria_qwen_v1.0
Well done .. any plans for nunchaku versions ?
@Useful_Ad_52233 I'm working on it, but surprisingly there's not a lot of info on doing it, so I'm having to hack together the process.
@ecaj much appreciated
Hi, thanks for the great model! If you have time, could you please add a Q5_K_M version? My machine running Q6 sometimes overflows VRAM and the image creation process takes 4-5x longer.
Noob here, is this possible to run with 8GB vram?
If you have enough system ram yeah, just will be slow. I'm working on a nunchaku version which handles lower vram better.
@ecaj Nunchaku version will be amazing. Is it even possible for qwen models yet?
@flo11ok874 Well there's official nunchaku versions, but nothing out there for doing nunchaku for custom models. I'm investigating.
I must have something out of date, I've tried loading your exact workflow from image and I get abominations.
I've tried BF16, pruned, and GGUF and get nothing close to the quality you're producing, whereas my outputs from JibMix GGUF are flawless.
What am I missing?
Hmm, are you using the settings in my workflow? I have a simpler one I plan to publish tonight that doesn't use any custom nodes. My process was inspired by what jibmix qwen was doing.
Same LoRA? Same Sampler and Scheduler? Same VAEs (the original one and the wan 2.1 2x upscale vae)?
@ecaj Yep, everything the same, was just trying to recreate your Viking image as a sanity check without changing anything, basically using the same WF with Jib too.
@GoatKingXK Are you using the Lightning LoRA? Jib has the Lightning LoRA merged in, and mine technically inherits some of that from being started from Jib, but it still requires using the LoRA to get the same results I post. Unfortunately the LoRA listing didn't make it into my metadata due to how things were set up, but you should be able to get similar results with it in the 0.5 strength range.
What are the tips for generating different types of women? All I can get is Asian or Caucasian. I'm not into the Africa stuff, and need some nice tips for real please!
You can use nationalities but the variety is not as much as I'd like. You could take a picture of someone and feed it to an LLM and ask it to describe that person for use in generating an image and to be detailed about facial anatomy or something.
This is an amazing model and workflow, holy crap.... thank you!
What is the purpose of the -.50 SNOFS 1.2 lora you have loaded? To help prevent NSFW from sneaking in?
I was testing using that to make it more SFW. It was only for one image though; anywhere else it showed up it was just a remnant, as I generally don't use LoRAs often.
Can someone provide ideal settings for Draw Things for this model? I keep getting images that are not so sharp and detailed. Using a Macbook Pro Max M3 48GB RAM.
I have never used it before, is there information on what samplers/schedulers it has access to? Also does it have the concept of latent upscaling?
The Samplers available that work with Qwen models on Draw Things are:
DPM++ 2M Trailing
Euler A Trailing
DPM++ SDE Trailing
DDIM Trailing
UniPC Trailing
DPM++ 2M AYS
Euler A AYS
DPM++ SDE AYS
UniPC AYS
I've tried most of them and I still get the same results. Let me know what you recommend
@Renergy Euler and UniPC are probably best, but I'm not sure what "Trailing" means specifically, and I haven't tried AYS. The big thing for detail and clarity is latent upscaling, but unfortunately I don't have an Apple device to test this myself.
Is there any prompting that helps keep the image SFW? Every time I try to generate an image of a woman with a low cut top, it exposes the nipples.
It can be hard; avoiding prompting breasts/nipples or things like cleavage can help. For instance, if you just prompt "A woman with a low cut v neck shirt." it will do that and show cleavage but covered breasts. I want to fix this to make it easier to work with.
This level of fine tuning is an amazing quality that you can't help but admire deeply.
Sometimes unwanted NSFW is created. However, even as future versions develop, it seems better not to force this part to be SFW. Many people are disgusted by AI-generated NSFW, but if you don't train the model on NSFW, its understanding of the proportions and structure of the human body decreases.
Once again, thank you for your hard work and talent
Current version 1.0 gives female nipples to ALL shirtless male bodies and also seems to be adding a bit of female breast shape to males.
Also, I cannot get the quality presented in the examples with the same settings but a different workflow. Will try harder.
I keep writing. It's a really great model, but there's one thing I'm disappointed about. This is a common problem with qwen-image, but the texture of the atmosphere is weak. The model is good at expressing the shape of objects, but the emotional expression of the production is lacking. I hope there will be a masterpiece like fluxmania in qwen-image soon. Of course, it's still so great right now
@Adel_AI
Thanks for taking a look at it. Qwen commonly requires you to prompt what you want, and can feel more basic by default. I'm able to get more expressive and interesting images by being much more specific, and I leverage an LLM to add that to my prompts since it can be annoying to manually go through it all.
That is all to say, I want to give the next version some more character versus default Qwen Image, but it remains to be seen how far I can push it.
What tool did you use to train this full finetune?
It's not a true fine-tune; I used AI Toolkit for training. It's a huge model, so I'm doing focused training for now.
Can someone share the workflows? It doesn't seem to work in the official Qwen workflow. Thanks!
It should work with a standard Qwen workflow, but the quality I achieve requires more than the default workflow. Let me get my workflows published separately.
@ecaj pls!
@JSJZYWRE I will tonight. Busy weekend.
@ecaj thanks bro,you are awesome
@JSJZYWRE Sorry for the delay, posted here: https://civitai.com/articles/22862
@ecaj great job! I will test later.
Did you upload it to TA already? Link?
@Ray3780 Thanks for sharing the link.
Great model, thanks for sharing it :-)
Are you planning on doing a less horny version?
Yes, I'm actively working on captioning a dataset that will both improve the NSFW features (ideally making them more consistent) and make them entirely opt-in.
@ecaj thats fantastic man! Glad to hear it 😀
any plan to release a nunchaku version or lora?
I'm working on nunchaku version, sadly there's no easy way to do it.
For easier sharing, I quickly uploaded two versions of my workflow (one with custom nodes and one without) here: https://civitai.com/articles/22862
I want to have a proper model page for them soon but couldn't get the model creation page to work due to an error from Civit's backend.
Thank you for creating such a fantastic model. I do have one suggestion for any additional version you might put together. Dicks. Bigger, better shaped, better positioned dicks. Now, I'm a red-blooded American male who loves himself a beautiful naked chick—and you've done a great job of putting together a great variety of these. But sometimes, I like to see these women share the scene with a big ol' donger. And I've really been struggling to get one for some basic positions like missionary style sex and cum shots across a tummy. Does this make me a disgusting pervert? Probably. But I am also an artist. And this artist needs access to all his tools. Please sir, give me a better donger tool. Thank you.
MY MAN! Fellow Red Blooded American, Here! I Also like the occasional "Donger" slaying some Ladies. Now don't you ever think you're some Pervert ever again! Fat Hogs Tearing Apart Beautiful Woman is truly as old as time and Reserved Only for The Finest Appreciator's. Take Care, My Friend - Oregon, USA
Yes it does. That's a waste of semen. Ejaculate inside.
I'm literally crying, well said mate lol
Is 16 GB of VRAM (4070 Ti Super) enough for this model?
You can run it, but how fast and how well depends on your system RAM.
@ecaj DDR5 32 6000
@FinnHuman So 32 GB of RAM. The fp8 should work, maybe the Q6 version.
@ecaj good, thank you
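As a rough guide to the VRAM questions in this thread, here's a back-of-the-envelope sketch. Assumptions of mine: roughly 20B parameters for the Qwen-Image diffusion model alone, approximate bits-per-weight for the GGUF quant types, and the text encoder plus VAE add more memory on top of these numbers.

```python
# Rough on-disk/in-memory footprint of a ~20B-parameter diffusion
# transformer at different precisions. Bits-per-weight for the GGUF
# K-quants are approximate (they mix block scales into the count).

PARAMS = 20e9  # assumption: ~20B weights in the diffusion model alone

sizes = {}
for name, bits in [("bf16", 16), ("fp8", 8), ("gguf Q8_0", 8.5),
                   ("gguf Q6_K", 6.56), ("gguf Q4_K", 4.5)]:
    sizes[name] = PARAMS * bits / 8 / 2**30  # GiB
    print(f"{name:>9}: ~{sizes[name]:.1f} GiB")
```

This lines up with the comments above: fp8 comes out around 19 GiB (hence the "PRUNED is 19 GB" remark), Q6 around 15 GiB, so a 16 GB card relies on offloading/block-swapping into system RAM for anything larger.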
how can i use this on forge-neo?
Honestly I have no idea, I'd hope so. If you can use qwen-image or qwen-image-edit you should be able to use this one.
@ecaj going to try this now on neo. too bad the jib mix v5 doesn't work on stable diffusion. Not sure if the requirements are too high or not.
@ecaj Unfortunately it doesn't work on WebForge UI Neo; the connection times out right away. A Stable Diffusion-compatible version down the road might be good.
@Melodic_Possible_582589 Sorry to hear.
Does this support image-to-image formatting?
You mean editing like qwen-image-edit? No, but it does support image to image.
do you have quantized version? Q4? doesn't fit on 12gb xD
No, not sure how well it would perform. You should look into z-image-turbo, it is smaller and fast. I am planning a version of that but still working on it.
@ecaj Do you need compute resources? I'm able to provide that.
@jeremiahomolewa56846 No, making a Q4 isn't hard, just quality is going to suffer compared to others. If you have the CPU ram you could try fp8 and let it block swap @ponystalk69990
Great work! Do you plan to release something like this for Z image?
I'm working on it, well a lot of things but definitely something I'm working on. Not wanting to rush out something that isn't to my standards plus if the base model ever gets released I'd prefer to have that to work with as well.
There's still no Qwen model that can surpass this. I'd be sad if a new version never comes out.
I haven't given up on it yet, just working on z-image currently and qwen image 2 is supposedly dropping soon so interested in seeing what comes before I do more with this one.
Are you thinking of updating the base to 2512?
Hyphoria Qwen is the best realism model out there; I guess it would really benefit from 2512.
I have a test merge I am going through right now. It is pretty good on its own, but I may want to do a little training on top to make it a solid v2.
@ecaj very cool, thanks! qwen2512 is very promising in my eyes wrt. realism.
@ecaj thanks man
Any news on that Hyphoria Z image 😅
Actively working on it.
It's now April 2026 and your V1 still beats all the other models (not just Qwen Image). Hoping we can get the 2512 version. Thank you!