Qwen Image-to-Image / Text-to-Image Workflow
This workflow can generate images from text prompts or transform existing images. It also supports face swapping, LoRA models for custom characters or styles, and random prompt batches to quickly explore variations. Together, these features make it easy to create consistent characters, try different looks, and experiment with new ideas.
I built this off of other workflows!
*Version 2.0 fixed the faceswap issues!*
*Version 1.2 now uses an abliterated text encoder.*
This means you can now easily do NSFW content; you only need to download the abliterated version. It's in the instructions within the workflow...
The newest version no longer uses the random prompt generator. If you want that functionality, just download version 1.0!
Description
Added nodes for the abliterated text encoder (this allows NSFW generations)
Comments
Tried to use the workflow, but getting error "ValueError: Unexpected text model architecture type in GGUF file: 'clip'"
which text encoder are you using? Qwen2.5-VL-7B-Abliterated-Caption-it.Q8_0.gguf or qwen_2.5_vl_7b_fp8_scaled.safetensors?
@SnowShoes311 - Qwen2.5-VL-7B-Abliterated-Caption-it.Q8_0.gguf
@ebonydad Try using qwen_2.5_vl_7b_fp8_scaled.safetensors. the node to use it is underneath the gguf node.
@SnowShoes311 - With that one I get the following error: "CLIPLoaderGGUF
Mixing scaled FP8 with GGUF is not supported! Use regular CLIP loader or switch model(s) (E:\aiart\Stability Matrix\Packages\ComfyUI\models\clip\qwen_2.5_vl_7b_fp8_scaled.safetensors)"
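(The error above comes down to pairing the wrong file with the wrong node: the fp8-scaled safetensors encoder has to go through the regular CLIP loader, and the GGUF quant through the GGUF loader. A minimal sketch of the pairing; the helper function is hypothetical, but the node names match the ones in the workflow and in the error message:)

```python
def clip_loader_for(filename: str) -> str:
    """Pick which ComfyUI loader node a text-encoder file belongs in (illustrative only)."""
    name = filename.lower()
    if name.endswith(".gguf"):
        # GGUF quants only load through the GGUF node
        return "CLIPLoaderGGUF"
    if name.endswith(".safetensors"):
        # fp8-scaled safetensors must use the regular CLIP loader,
        # or you get the "Mixing scaled FP8 with GGUF" error
        return "CLIPLoader"
    raise ValueError(f"unrecognized text encoder format: {filename}")
```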
@ebonydad Can you export the workflow as a png and send it to me?
@SnowShoes311 - https://pastecode.io/s/ciwv5a9q
@ebonydad https://pastecode.io/s/4n01uxuh
@SnowShoes311 - So the safetensors works, but why doesn't the GGUF text encoder work?
@ebonydad I'm not sure. Have you gone into Manager and installed all missing nodes? Is Comfyui up to date?
@SnowShoes311 - I've downloaded all nodes and models, and it's still an issue. Basically it states that the abliterated GGUFs aren't "clip" models.
@ebonydad There are three things I think it could be.
1. Did you get the Quant 8 model?
2. Did you put it in the /models/text_encoders folder?
3. The GGUF only works in the GGUF node. NOT the safetensors node.
If those things don't work you can send me the log in a dm and I can find the issue.
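(If the three checks above don't turn anything up, you can also read the GGUF file's own metadata: the "Unexpected text model architecture type" error presumably means the `general.architecture` key holds a value the node doesn't recognize. This is my own diagnostic sketch, not part of ComfyUI; it assumes the GGUF v2/v3 header layout and only handles the scalar, string, and array value types from the GGUF spec:)

```python
import struct

# Byte widths of GGUF scalar value types (spec codes); 8 = string, 9 = array
_SIZES = {0: 1, 1: 1, 2: 2, 3: 2, 4: 4, 5: 4, 6: 4, 7: 1, 10: 8, 11: 8, 12: 8}

def _read_str(f):
    (n,) = struct.unpack("<Q", f.read(8))
    return f.read(n).decode("utf-8")

def _skip_value(f, vtype):
    if vtype == 8:                       # string: uint64 length + bytes
        (n,) = struct.unpack("<Q", f.read(8))
        f.seek(n, 1)
    elif vtype == 9:                     # array: elem type + count + elems
        etype, count = struct.unpack("<IQ", f.read(12))
        for _ in range(count):
            _skip_value(f, etype)
    else:
        f.seek(_SIZES[vtype], 1)

def gguf_architecture(path):
    """Return the general.architecture string from a GGUF file, or None."""
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            raise ValueError("not a GGUF file")
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
        for _ in range(n_kv):
            key = _read_str(f)
            (vtype,) = struct.unpack("<I", f.read(4))
            if key == "general.architecture" and vtype == 8:
                return _read_str(f)
            _skip_value(f, vtype)
    return None
```

If this prints something like "clip" instead of a Qwen architecture string for your file, the download itself is the wrong model, not the node setup.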
@ebonydad Do you have the latest version of the GGUF loader node? IIRC I had this problem too and it went away by updating. 85% sure, it's all a blur with all this new stuff and new workflows every week...
@gman_umscht I've deleted the GGUF node and reinstalled it and still have had issues.
@ebonydad Which one do you have installed? I have the one from calcuis' GitHub (author = gguf), node at version 2.3.1; there's also one from city96 and maybe others.
@gman_umscht - I have city96 GGUF installed. Just looked at the workflow nodes, and see that there is a different gguf. I am updating it now. Will let you know what happens.
@gman_umscht - I am running city96 version of GGUF. I installed the other one, but still no go.
@ebonydad If you're able to post your workflow here or somewhere else I can try to see if the same problem shows up with my Comfy instance
Love the workflow, but having insane problems with faceswap.
All the models are downloaded, but when I run the faceswap part I just get a generation with a black hole in the middle of the face.
EDIT: fixed, see below for the answer.
Here is a png of my workflow to see if it's an error on my part (most likely)
https://files.catbox.moe/3p7bbl.png
Never mind, apparently one of the models (insightface_128) was nested one folder too deep. It allowed me to gen the first time, then gave me the error, which let me fix it. All good now, thank you for all your hard work!
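(For anyone hitting the same black-face result: the fix was moving the model file up one level so the loader finds it. The layout below is illustrative; the exact folder and file names depend on your install and on which faceswap nodes the workflow uses:)

```
ComfyUI/models/insightface/
└── insightface_128          ← model goes directly here

not:

ComfyUI/models/insightface/
└── insightface/
    └── insightface_128      ← nested one folder too deep
```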
Should be fixed now in version 2.0. I added new logic that should make everything work.
Absolutely T for Tremendous! Workflow of the YEAR!!! - You are an absolute fkn legend! Thanks a million for this. I honestly don't know how you subject yourself to all the moronic "I have an error" comments, but I for one appreciate that you do!
- But here is a solution for you, and for all the commenters who can't seek out a solution because Google is blocked in their zip code:
In the description, suggest that anyone with an error copy and paste the error log into an LLM chat window; for best results use a web-search-enabled one like ChatGPT or Perplexity.
The LLM will spit back a step-by-step guide on how to fix the error. I have yet to encounter an error that has not been solved by an LLM.
Thanks. I updated the logic. FaceSwap should work as expected now!