CivArchive
    JibMixWan T2I effective workflow - v1.2

    This is my personal workflow for the JibMixWan 2.2 model, used as a text-to-image model.

    Model: https://civarchive.com/models/1813931/jib-mix-wan

    I dislike cluttered workflows and spent quite some time testing to find the settings that produce the best image quality for my use cases.

    In the workflow I use a GGUF variant of JibMixWan - you can swap this out for the fp8 or fp16 model, which are available for download (use a 'Load Checkpoint' node instead of the GGUF loader). A GGUF version is not available for download, but it is simple to create one yourself from the fp16 using the tool and instructions in city96's GGUF node package: https://github.com/city96/ComfyUI-GGUF/tree/main/tools

    (Creating GGUFs yourself is a simple skill worth picking up.)
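    For reference, the conversion city96's tools page describes is roughly a two-step process: first convert the fp16 safetensors into an unquantized GGUF container, then quantize it with a patched llama.cpp build. A rough sketch follows - the paths and output filenames are illustrative, so check the linked README for the current invocation:

    ```shell
    # Step 1 (run from ComfyUI-GGUF/tools): wrap the fp16 safetensors in a GGUF
    # container. Path is a placeholder for your local fp16 download.
    python convert.py --src /path/to/jibMixWan_fp16.safetensors

    # Step 2: quantize the resulting F16 GGUF to Q8_0 using the llama.cpp
    # quantizer built with city96's patch. Filenames here are illustrative.
    llama-quantize jibMixWan-F16.gguf jibMixWan-Q8_0.gguf Q8_0
    ```

    Q8_0 is the variant recommended in the comments below for cards where fp16 does not fit fully in VRAM.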

    Suggestions and links for LoRAs and upscale models are in the notes in the workflow. All missing nodes should be easily installable via the ComfyUI Manager.

    Edit: if the Ultimate SD Upscale node produces images with visible seams or differently coloured eyes on a single subject (this depends very much on the image), you can reduce the denoise from 0.28 to a safer 0.23, or (for tricky images) to 0.15.

    Description

    You can now easily adjust the steps for all samplers with one node; additional explanations have been added; USDU denoise is set to 0.23, which is a more stable option across different use cases.

    FAQ

    Comments (8)

    BFliP - Sep 21, 2025
    CivitAI

    Definitely working and generating nice results; however, I do find it painfully slow compared to other text-to-image methods. I've used the checkpoint file and the lightx2 LoRA, on an RTX 4070 with Sage Attention.

    Greywolf666
    Author
    Sep 22, 2025

    Are you using the fp16 model? That is too big to run fully in VRAM on a 4070. Q8 or fp8 should fit (with the text encoder on CPU/RAM).

    GRIJAY - Sep 22, 2025
    CivitAI

    Omg, why is it extremely slow even on a 5090?

    Greywolf666
    Author
    Sep 22, 2025

    The upscale part takes time with USDU (that's normal for that node); the rest shouldn't be slow - unless part of the model is not running in VRAM and spills into CPU/RAM or shared memory. I run the text encoder on the CPU (it should be set that way in the workflow) so the WAN model can run fully in VRAM. Which model are you using? I use Q8 to make sure nothing spills out of VRAM; I don't know whether the fp16 might spill over.

    Greywolf666
    Author
    Sep 22, 2025

    On a 5090, at 2 MP resolution, with SageAttention, the first two sampler steps should take ~40 seconds each, and the Ultimate SD Upscale ~3 minutes 40 seconds.

    Eschelon - Oct 12, 2025

    4.7 it/s on a 4070 Ti Super, 29 it/s on a 6000 Pro. The workflow needs edits to work well.

    delta45424155 - Dec 15, 2025
    CivitAI

    Do you use the Wan 2.2 high or low LoRA?

    Greywolf666
    Author
    Dec 16, 2025

    The low LoRA.

    Workflows
    Wan Video 2.2 T2V-A14B

    Details

    Downloads
    514
    Platform
    CivitAI
    Platform Status
    Available
    Created
    9/12/2025
    Updated
    5/12/2026
    Deleted
    -

    Files

    jibmixwanT2IEffective_v12.zip

    Mirrors