Description
This workflow is similar to my other SDXL workflows, except that it incorporates IPAdapter to allow you to style the generation with an image. That image could be a face that you want the generation to be based on or it could be an artistic image with a style that you want the generation to replicate.
This is a fun tool that has a lot of uses!
Comments (38)
What is the difference between the Beta and Beta57 schedulers? Is it just some preset value inside the scheduler? Am I fine using regular Beta, or should I try to find the 57 version?
Ok, found the answer myself: https://github.com/ClownsharkBatwing/RES4LYF
I don't know the technical details of the difference between beta and beta57. Sometimes one or the other gives better results. Karras and exponential also work well. It's just a matter of trial and error.
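For intuition, a scheduler is just a rule for spacing the noise levels (sigmas) across the sampling steps, which is why swapping schedulers changes the look without changing anything else. Here's a minimal sketch of the well-known Karras spacing; the function name and the min/max sigma defaults are illustrative, not ComfyUI's exact code:

```python
def karras_sigmas(n, sigma_min=0.03, sigma_max=14.61, rho=7.0):
    """Karras-style schedule: interpolate in sigma**(1/rho) space,
    which clusters more steps at low noise where detail is refined."""
    sigmas = []
    for i in range(n):
        t = i / (n - 1)  # 0 -> 1 across the steps
        s = (sigma_max ** (1 / rho)
             + t * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho
        sigmas.append(s)
    return sigmas

schedule = karras_sigmas(10)  # starts at sigma_max, decays to sigma_min
```

A larger rho packs steps even more tightly at the low-noise end; exponential and beta schedules differ only in this spacing rule, which is why trial and error between them is reasonable.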
It keeps telling me "Some Nodes Are Missing" and then points to UltralyticsDetectorProvider, even after I've clicked install all missing nodes and restarted ComfyUI.
I've been having this problem in ComfyUI recently and couldn't get nodes to install even after waiting a while, restarting, and separately updating ComfyUI and its Python dependencies. I think I know a fix that might work for you. Close the messages stating that you are missing nodes, go to ComfyUI Manager manually, click Install Missing Nodes, then click Install on each listed node. It will ask which version if applicable (I always choose the latest), and you should be good to go.
@HaokiChan Follow the recommendation from @allofdarkness. If that doesn't work then you may need to manually install or update Ultralytics using git. I've also had this happen sometimes where a node package just refused to install or update.
How come the IPAdapter workflow works with no CLIP Vision loaded in the workflow? Is it embedded in the MoP checkpoint?
CLIP Vision is definitely not merged into the MoP checkpoint. I'm not 100% sure how it works, but the IPAdapter models probably handle the CLIP Vision operations internally.
It seems to load some files directly from predetermined paths and filenames.
@shinizaya You should just update the node to load the files from the path on your local machine.
@GBRX Oh you mean the code of the node? It's all good and working great as is, I just wanted to confirm that the node does indeed load clip vision internally pretty much. Thanks a lot for your workflows and MOP btw I love em.
IPAdapter model not found. @shinizaya @GBRX Is this the exact model? I have it placed at C:\comfyui\models\ipadapter\ip-adapter-plus_sd15.safetensors and I'm still getting this error. Can you help me resolve it?
@myphotosjanaki595 That's the SD 1.5 version. You need the SDXL version. If you've got IPAdapter installed then it should automatically download what it needs.
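If the automatic download isn't kicking in, it can help to sanity-check what's actually on disk. A quick sketch, assuming a standard layout; the folder path and required filename here are examples (the SDXL plus model is typically named ip-adapter-plus_sdxl_vit-h.safetensors), so adjust them to your install:

```python
from pathlib import Path

# Example values: list whichever IPAdapter files your workflow needs.
REQUIRED = {"ip-adapter-plus_sdxl_vit-h.safetensors"}  # SDXL variant, not _sd15

def missing_ipadapter_models(models_dir):
    """Return the required IPAdapter files not yet present in models_dir."""
    d = Path(models_dir)
    present = {p.name for p in d.glob("*.safetensors")} if d.is_dir() else set()
    return sorted(REQUIRED - present)

# e.g. missing_ipadapter_models(r"C:\comfyui\models\ipadapter")
```

Anything this returns still needs to be downloaded into that folder before the workflow will find it.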
This used to work for me but I haven't tried it for a while. Now it gives an error: FaceDetailer
'DifferentialDiffusion' object has no attribute 'apply' ...
Sorry, I have no idea what that issue could be. Maybe try updating Comfy if it's been a while?
@GBRX ChatGPT suggests it's either FaceDetailer being incompatible with later versions of ComfyUI (I update it and the nodes every day), or something to do with comfyui-impact-subpack. I don't know enough to troubleshoot any further. Thanks for your response.
@dcham2310 Yeah, I did a little searching too, but found nothing definitive other than possibly a low-level version mismatch between Python packages.
If it's the FaceDetailer, does this still happen when you select a different face detailer?
@Pandaofd00m I looked at the nodes and found another one: Face Detailer (AV). Tried that and got the same error. There's also a FaceDetailerPipe (and the AV version), but that has different inputs, so I have no idea how to connect it.
OK, I got there in the end. It boils down to my constant confusion between custom nodes and the (sub)nodes available when I search inside the workflow screen by double-clicking in free space (hope that makes sense). Anyhow, I double-clicked, searched for a Differential Diffusion node, and found it. Placed it between the refiner model output and the Face Detailer's model input, and now it all works!
@dcham2310 Ah, this one - yep, been there too. Sorry for not remembering it.
@dcham2310 Hi, thanks for the tip. What strength do you keep as a base? 1 tends to soften everything.
@zthrx If you mean the strength of the DD node: I ran it at 1 and it looked OK, but thought it could be better. 0.6 seems to be working for me, but I haven't tried much up and down the scale.
CheckpointLoaderSimple: Value not in list: ckpt_name: 'gonzalomoXLFluxPony_v50FluXLDMD.safetensors' not in ['cyberrealisticFlux_v21.safetensors', 'cyberrealisticPony_v130.safetensors', 'mopMixtureOfPerverts_v40.safetensors', 'realDream_flux1V2.safetensors']
UltralyticsDetectorProvider: Value not in list: model_name: 'bbox/Eyeful_v2-Paired.pt' not in ['bbox/face_yolov8m.pt', 'bbox/hand_yolov8s.pt', 'segm/person_yolov8m-seg.pt']
You can find a bunch of the bbox models here, https://huggingface.co/ashllay/YOLO_Models/tree/e07b01219ff1807e1885015f439d788b038f49bd/bbox
@GBRX Thank you. I don't know what that means but I will figure this stuff out eventually. I haven't found any tutorials yet for this kind of stuff.
@InappropriateSquid This channel has good ComfyUI tutorials, https://www.youtube.com/@pixaroma
@GBRX Thank you. I do watch Pixaroma; he has a lot of great step-by-step tutorials, and I have gotten a few workflows running by following his instructions. But he covers the how, not the why. So when I try to use Civitai checkpoints, I usually run into problems that I don't have the grounding to solve.
I am surprised Civitai doesn't have ComfyUI tutorials. I will try comfyui-wiki.
I got the following error: FaceDetailer
type object 'DifferentialDiffusion' has no attribute 'execute'
The error is caused by a ComfyUI update. Add a Differential Diffusion node between the model output and input.
For the checkpoint refiner, do I need MOP?
No, you can use any photorealistic checkpoint, but you may need to change the KSampler settings depending on the checkpoint used.
Hi, man! Is there a workflow for doing I2V with this MoP? Downloaded the one here in Civit but I've only seen how to create image. Thank you!
Hi, I've got my I2V workflow for sale on Patreon, https://www.patreon.com/posts/wan-2-2-image-to-141336142?source=storefront, but the I2V template in Comfy is also very good.
@GBRX I'm familiar with bigasp, but is littleasp a different model I can find somewhere?
It's an experimental model I toyed with - https://civitai.com/models/1513492?modelVersionId=2032014