CivArchive
    JBOOGX & THE MACHINE LEARNER'S ANIMATELCM SUBJECT & BACKGROUND ISOLATION via INVERTMASK VID2VID + HIGHRESFIX - v1.0
    Preview 7952659


    This is an evolution of my AnimateLCM workflow. Cut down and put together for ease of use.

    This workflow should require at MOST 12-14GB of VRAM, making it a lot more approachable for folks with smaller GPUs.

    This workflow utilizes and requires an Alpha Mask that matches your video input to work properly. It will take the mask, resize it (shoutout to @Kijai for helping me get past the initial CUDA errors), and invert it.

    I recommend using MACHINEDELUSION's PhotonLCM model with this workflow.

    How you go about getting that AlphaMask is entirely up to you. I personally like rotoscoping manually in Adobe After Effects for precision.

    The first two IPAdapter images will be for your subject (the white part of your alpha mask).

    The second two IPAdapter images will be for your background (the mask is automatically inverted so these target the black region instead).
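Conceptually, the mask routes the first IPAdapter pair to the white (subject) region, and its inverse routes the second pair to the black (background) region. A toy compositing sketch of that routing in NumPy — a simplification for intuition, not the IPAdapter attention mechanism itself:

```python
import numpy as np

def route_by_mask(subject, background, mask):
    """Blend two sources per pixel: mask=1 picks subject, mask=0 picks background."""
    mask = mask[..., None].astype(np.float32)  # add a channel axis for broadcasting
    return subject * mask + background * (1.0 - mask)
```

The inverted mask is just `1 - mask`, which is why one alpha mask is enough to drive both IPAdapter pairs.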

    Running the HighResFix at Bilinear will keep you from getting a CUDA out-of-memory error on videos with longer frame counts. At 1.5x you will get a clean 768x1344 output once the HighResFix is finished.
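The 1.5x figure works out as follows, assuming a 512x896 base render (the base resolution is my assumption; it is not stated above):

```python
def highres_size(width, height, scale=1.5):
    """Compute the output resolution of an upscale step."""
    return int(width * scale), int(height * scale)

# 512 x 896 scaled by 1.5 gives the 768 x 1344 output mentioned above
print(highres_size(512, 896))  # (768, 1344)
```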

    Appreciate everyone and all the support I receive.

    Special shoutout to @cerspense @Purz @jeru @Fill @MidJourney.Man @PatchesFlows for allowing me to babble about my AnimateDiff wants and desires and helping me navigate how to get there when I hit a wall and hit it often.

    Find me on IG @jboogx.creative

    Description

    This version is for AnimateLCM

    FAQ

    Comments (14)

    huwhitememes · Mar 14, 2024 · 1 reaction

    I'm working with a 12GB 4070 for a couple more weeks till the big card comes so, these lower VRAM requirement workflows have been essential. Thank You!

    jboogx_creative (Author) · Mar 16, 2024 · 1 reaction

    Hope you can make some cool stuff!

    huwhitememes · Mar 17, 2024

    @jboogx_creative Appreciate it, Brother.

    Catz · Mar 15, 2024 · 6 reactions

    ReActor FastFaceSwap node will be a pain.

    You need to follow the README.md instructions and then run install.bat in

    ComfyUI\custom_nodes\comfyui-reactor-node

    You'll need to install 2 models and place them in a specific folder.

    If you have errors, try to check the following:

    - Install Visual Studio

    - C++ extension

    - Microsoft Visual C++ compiler

    You might receive insightface errors; this video helped me get the wheels set up with the right Python version. I had Python 3.10 while blindly trying to install the 3.11 version 🤦‍♂️
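The version mismatch Catz describes is easy to check up front: prebuilt insightface wheels are tagged with the CPython version they were built for (cp310 for Python 3.10, cp311 for 3.11), and the tag must match the interpreter ComfyUI actually runs. A general-purpose sketch for printing the tag you need — not part of the workflow itself:

```python
import sys

# A matching wheel filename contains this tag, e.g. insightface-...-cp310-...whl
wheel_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print(wheel_tag)
```

Run it with the same Python that launches ComfyUI (the portable build bundles its own interpreter), not whatever `python` happens to be on your PATH.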

    jboogx_creative (Author) · Mar 16, 2024 · 1 reaction

    Thanks for this! I promise I didn't create the nonsense that is ReActor XD

    pretsbr · May 17, 2024 · 1 reaction

    Hey, thanks for the info!!! I will try to fix it right now; it's missing ReActorFaceSwap and the Text Box nodes.

    Catz · May 17, 2024

    @pretsbr The text box nodes are just regular text boxes! Delete them and add new ones. But jboogx mentioned that he doesn't even use them anymore; he just writes directly into the prompt travel at 0.

    sharella · Mar 23, 2024

    The IPAdapter nodes aren't working anymore. Do you know how to fix it? I tried to update via ComfyUI Manager and had the same issues.

    jboogx_creative (Author) · Mar 25, 2024 · 1 reaction

    The version has been updated. Download the latest version of the workflow.

    lior007 · Mar 23, 2024

    I HAVE AN ERROR. WHAT TO DO?

    Error occurred when executing Efficient Loader: 'NoneType' object has no attribute 'lower'
    File "D:\GitHub\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
        output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\GitHub\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
        return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\GitHub\ComfyUI_windows_portable\ComfyUI\custom_nodes\deforum-comfy-nodes\deforum_nodes\exec_hijack.py", line 55, in map_node_over_list
        return orig_exec(obj, input_data_all, func, allow_interrupt)
    File "D:\GitHub\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
        results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\GitHub\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 151, in efficientloader
        model, clip = load_lora(lora_params, ckpt_name, my_unique_id, cache=lora_cache, ckpt_cache=ckpt_cache, cache_overwrite=True)
    File "D:\GitHub\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui\tsc_utils.py", line 372, in load_lora
        lora_model, lora_clip = recursive_load_lora(lora_params, ckpt, clip, id, ckpt_cache, cache_overwrite, folder_paths)
    File "D:\GitHub\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui\tsc_utils.py", line 363, in recursive_load_lora
        lora_model, lora_clip = comfy.sd.load_lora_for_models(ckpt, clip, comfy.utils.load_torch_file(lora_path), strength_model, strength_clip)
    File "D:\GitHub\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 13, in load_torch_file
        if ckpt.lower().endswith(".safetensors"):

    chentaopisces890 · Mar 26, 2024

    ME TOO

    sonidosenarmonia · Mar 30, 2024 · 1 reaction

    I solved something like that by removing a lora it was trying to load that I didn't have. Look at the node furthest to the top left.
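That explanation fits the traceback above: when the LoRA filename can't be resolved to a file on disk, the lookup returns None, and the loader then calls `.lower()` on it. A minimal sketch of the failure mode and a guard — the function names here are hypothetical, not the Efficiency Nodes API:

```python
def resolve_lora_path(name, available):
    """Mimic a path lookup that returns None when the file is missing."""
    return available.get(name)

def load_torch_file(path):
    # The real code crashes here when path is None: None has no .lower()
    if path is None:
        raise FileNotFoundError("LoRA not found; remove it from the loader node")
    if path.lower().endswith(".safetensors"):
        return f"loaded {path}"
    return f"loaded (legacy) {path}"
```

Removing the missing LoRA from the loader node (or dropping the file into the loras folder) sidesteps the None entirely.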

    petrenko76v891 · Apr 1, 2024

    So I also removed the lora, and the process got past that step.

    LuxMint · May 8, 2024

    HE ISN'T EVEN TRYING TO HELP PEOPLE WITH ERRORS!! Guys like him really get on my nerves

    Workflows
    SD 1.5 LCM

    Details

    Downloads
    709
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/14/2024
    Updated
    5/14/2026
    Deleted
    -

    Files

    jboogxTHEMACHINELEARNERS_v10.zip

    Mirrors