CivArchive
    ComfyUI Workflow for Segmented Style Transfers - v1.0

    If you're looking for a more efficient way to change outfits in ComfyUI, this workflow is worth exploring. It combines IPAdapter, Grounding DINO, and Segment Anything to transfer styles onto precisely segmented objects.

    Workflow Overview

    The workflow consists of three main groups:

    1. Basic Workflow: Sets up the foundation for the entire process, pairing an SDXL checkpoint with an inpainting checkpoint.

    2. IPAdapter: Transfers styles from a reference image to the target image using the CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors and ip-adapter-plus_sdxl_vit-h.safetensors models.

    3. Segmentation: Uses Grounding DINO to locate objects in the image from a textual prompt, with Segment Anything generating precise masks for the detected regions.
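The three groups chain together as mask-gated compositing: segmentation produces a binary mask, and the IPAdapter-styled inpaint result replaces the base image only where the mask is set. A minimal sketch of that final compositing step in plain Python (the nested-list image/mask representation here is a simplified stand-in, not ComfyUI's actual tensors):

```python
def composite_with_mask(base, styled, mask):
    """Blend `styled` pixels over `base` wherever `mask` is 1.

    base, styled: H x W lists of pixel values
    mask: H x W lists of 0/1 flags from the segmentation stage
    """
    return [
        [s if m else b for b, s, m in zip(brow, srow, mrow)]
        for brow, srow, mrow in zip(base, styled, mask)
    ]

# Example: a 2x3 "image" where only the masked region takes the new style
base   = [[10, 10, 10], [10, 10, 10]]
styled = [[99, 99, 99], [99, 99, 99]]
mask   = [[0, 1, 1], [0, 0, 1]]
print(composite_with_mask(base, styled, mask))
# -> [[10, 99, 99], [10, 10, 99]]
```

This is why prompt quality in the segmentation group matters: any pixels the mask misses keep the original outfit untouched.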

    How to Use This Workflow

    This workflow is suitable for creating virtual try-on experiences, batch processing images, or experimenting with different styles and objects. To get started, set up the nodes as described above and supply your target image and style reference. Adjust the settings and parameters to reach the desired outcome.
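For batch processing, a workflow exported in ComfyUI's API format can be queued programmatically against a running server's /prompt endpoint. A hedged sketch using only the standard library (the server address is ComfyUI's default; the toy node graph is a placeholder, since real graphs come from "Save (API Format)" in the ComfyUI menu):

```python
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188"  # default local ComfyUI address

def build_payload(workflow: dict) -> dict:
    """Wrap an API-format workflow graph for ComfyUI's /prompt endpoint."""
    return {"prompt": workflow}

def queue_workflow(workflow: dict) -> bytes:
    """POST the workflow to a running ComfyUI instance."""
    data = json.dumps(build_payload(workflow)).encode("utf-8")
    req = urllib.request.Request(
        f"{COMFYUI_URL}/prompt",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Toy single-node graph; a real export contains the full node set
# (checkpoint loaders, IPAdapter, Grounding DINO, samplers, etc.).
workflow = {"1": {"class_type": "CheckpointLoaderSimple",
                  "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}}}
payload = build_payload(workflow)
print(sorted(payload.keys()))
```

Looping `queue_workflow` over a directory of input images, swapping the image-loader node's input each time, gives a simple batch driver.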

    Additional Resources

    If you're interested in learning more about this workflow, check out the video tutorial on Prompting Pixels.


    Description

    Workflows
    SDXL 1.0

    Details

    Downloads
    229
    Platform
    CivitAI
    Platform Status
    Available
    Created
    6/18/2024
    Updated
    9/27/2025
    Deleted
    -

    Files

    comfyuiWorkflowFor_v10.zip

    Mirrors

    CivitAI (1 mirror)