Flux-Kontext Try-On Workflow!
version 2.0
* compartmentalized workflow
* supports flux-nunchaku
* manually offloads Flux and Phi models from VRAM to CPU

Here is my clothing transfer workflow. All you need are two images:
* an image of the subject you want to dress
* an image with the clothes you want to transfer
There's no need for prompting; it should work out of the box. To be honest, it took me a LOT of trial and error to figure it out. It's a bit complex, but it does the job, and it works most of the time.
Word extraction from images
I use both Florence2 (to extract captions from the image) and Phi (to sort out the mess). Flux Kontext wants a very specific prompt, and that's how we produce it, the lazy way, so it's pretty much automatic :)
Subject outlining
Using ClipSeg and stroking the mask contours helps Flux Kontext... or maybe not! Either way, I draw a circle around my subject to make it clear to Flux Kontext who I want in the final image.
2-step Method
It follows a two-step process:
1. Dress the subject in a swimsuit: I haven't found a better way to remove sleeves and leggings and prevent them from showing up in the end product.
2. Transfer the clothing from any image onto your subject.
Consistency
It's quite impressive and, at the same time, sometimes disappointing. Sometimes it's perfect, sometimes it botches the job, and sometimes certain images will refuse to work for one of several reasons:
* difference of scale and angle between the images
* you didn't provide an image with sufficient size and resolution (I left some info on the topic)
* Flux Kontext doesn't understand the clothing
* CLIP-L doesn't understand the name of some items
* the most likely reason: it's for your own good (check the section below)
SFW
Flux Kontext will do everything in its power to keep things SFW, and the great part is that it's unlikely to produce offensive results, even if the input images aren't SFW to start with (check the examples for that). Some types of clothing will not work. Private parts (crotch, nipples and the lower-outer part of the cups) must be covered; input images, however, will be processed regardless. It tends to extend the fabric over these areas so you can still process the image.
Feedback is welcome!
(In progress: v3.0 - updated 2025-09-25)
[x] improving the offload/recall nodes to detect/ignore unsupported models
[x] improving prompts for better fidelity and better transfer overall: simpler is better; Flux can get confused by too much info
[x] removing red outline step: replaced by a red box
[x] replacing Phi with another model: Gemma3:4b
[x] added an optimisation step on the target to prevent replacing the subject's face and body
[~] developing caching nodes for a consistent workflow. Will be quite handy in many workflows.
[~] removing experimental fluff (compartmentalisation)
[~] cleaning up new workspace
Abandoned:
[.] using a fashion fine-tune of Florence2: the only one available is worse!

Description
Lower VRAM requirements
Phi LLM model offloaded after use
CLIP unloaded after encoding the prompt
works on my 5070 Ti with 16GB VRAM + 32GB RAM
Compartmentalized workflow
using context nodes, acting as workflow chokes, requiring all inputs to be defined to move forward
using Crystools' pipe to/from nodes for the same effect
Nunchaku+turbo lora support confirmed
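The context-node "choke" described above boils down to an all-inputs-defined gate: downstream work only starts once every upstream value exists. A minimal illustrative sketch of that idea (the `gate` helper is hypothetical, not the actual node code):

```python
def gate(**inputs):
    """Workflow choke: only let execution continue once every
    upstream input is defined (not None), mimicking how context
    nodes hold back downstream nodes in the graph."""
    missing = [name for name, value in inputs.items() if value is None]
    if missing:
        # In ComfyUI the node would simply not execute yet;
        # here we signal the same condition explicitly.
        raise ValueError(f"waiting on inputs: {', '.join(missing)}")
    return inputs

# Downstream work only starts when all three are present.
ready = gate(image="subject.png", mask="mask.png", prompt="red dress")
```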
For this version I developed a new node called comfyui-model-offload, which allows offloading models to CPU and recalling them to CUDA. It's pretty useful and solves issues with models that can't be unloaded (like Phi in this workflow). It's also more flexible, as it uses the "preferred device" and the "offload device" defined in Comfy.
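At its core, the offload/recall pattern is just moving model weights between devices. A minimal sketch assuming torch-style `.to()` semantics (`DummyModel`, `offload` and `recall` are illustrative names, not the node's real API; in ComfyUI the device choices would come from its model management settings):

```python
class DummyModel:
    """Stand-in for a torch.nn.Module; real code would move actual weights."""
    def __init__(self):
        self.device = "cuda"

    def to(self, device):
        self.device = device
        return self

def offload(model, offload_device="cpu"):
    # Move the weights out of VRAM between workflow stages.
    # With real torch you'd typically also call torch.cuda.empty_cache()
    # afterwards so the freed blocks are actually released.
    return model.to(offload_device)

def recall(model, preferred_device="cuda"):
    # Bring the weights back before the next inference pass.
    return model.to(preferred_device)

model = DummyModel()
offload(model)   # VRAM freed while other models run
recall(model)    # back on the GPU for the next pass
```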
FAQ
Comments (5)
Installing that Phi node broke my running Nunchaku because of some additional Triton thing you have to install.
Sorry about that.
Have you managed to fix it?
I think it tried to update the 'accelerate' module, which in turn cascaded.
Do you propose any alternative to Phi?
Also, when installing Python modules manually I often run
python.exe -m pip install THEMODULE --dry-run
and if it wants to upgrade something I dislike, I run this one to prevent downloading dependencies:
python.exe -m pip install THEMODULE --no-deps
Sometimes a simple package wants to upgrade my PyTorch version (no thank you!)
Let me know your progress vanwaar495
From my side, I'm trying to fix some inconsistencies in the offload/recall package I developed, although it's doing a pretty good job so far.
Missing Node Types
When loading the graph, the following node types were not found:
MolmoModelLoader (in group node 'workflow>prompt')
MolmoGenerateText (in group node 'workflow>contextual prompt generation')
AILab_ColorInput (in group node 'workflow>mask clothes')
AILab_MaskExtractor (in group node 'workflow>mask clothes')
Pipe to/edit any [Crystools]
Pipe from any [Crystools]
NunchakuFluxDiTLoader
NunchakuFluxLoraLoader
Thanks for the feedback. That's telling me there's a lot of cleanup to do. Here is what I'm going to do:
- I'm going to publish the workflow without Nunchaku (anyone can replace the loaders with the Nunchaku ones, though). On top of that, Nunchaku manages offloading all by itself.
- I might remove the Crystools "workflow gates" if I manage to get the offload/recall nodes working more consistently.
- The masking will probably disappear; I'm developing a new prompt that doesn't need it.
- Molmo was chosen before Florence2 and Phi. I'm surprised it's still there.
lnknou Thank you very much! I will wait and hope