WAS Node Suite - ComfyUI - WAS#0263
ComfyUI is an advanced node-based UI utilizing Stable Diffusion. It allows you to create customized workflows such as image post-processing or conversions.
A node suite for ComfyUI with many new nodes, such as image processing, text processing, and more.
Share Workflows to the workflows wiki. Preferably embedded PNGs with workflows, but JSON is OK too. You can use this tool to add a workflow to a PNG file easily
Important Updates
[Updated 5/29/2023]
`ASCII` is deprecated. The new preferred text node output is `STRING`. This is a change from `ASCII` so that it is clearer what data is being passed. The `was_suite_config.json` will automatically set `use_legacy_ascii_text` to `false`.
Video Nodes - There are two new video nodes, `Write to Video` and `Create Video from Path`. These are experimental nodes.
Current Nodes:
BLIP Analyze Image: Get a text caption from an image, or interrogate the image with a question.
The model will download automatically from the default URL, but you can point the download to another location/caption model in `was_suite_config.json`. Models will be stored in `ComfyUI/models/blip/checkpoints/`.
SAM Model Loader: Load a SAM Segmentation model
SAM Parameters: Define your SAM parameters for segmentation of an image
SAM Parameters Combine: Combine SAM parameters
SAM Image Mask: SAM image masking
Image Bounds: Get the bounds of an image
Inset Image Bounds: Inset an image's bounds
Bounded Image Blend: Blend a bounded image
Bounded Image Blend with Mask: Blend a bounded image by mask
Bounded Image Crop: Crop a bounded image
Bounded Image Crop with Mask: Crop a bounded image by mask
Cache Node: Cache Latent, Tensor Batches (Image), and Conditioning to disk to use later.
CLIPTextEncode (NSP): Parse noodle soups from the NSP pantry, or parse wildcards from a directory containing A1111-style wildcards.
Wildcards are in the style of `__filename__`, which also includes subdirectories like `__appearance/haircolour__` (if your `noodle_key` is set to `__`). You can set a custom wildcards path in the `was_suite_config.json` file with the key `"wildcards_path": "E:\\python\\automatic\\webui3\\stable-diffusion-webui\\extensions\\sd-dynamic-prompts\\wildcards"`. If no path is set, the wildcards directory is located at the root of WAS Node Suite as `/wildcards`.
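The config key described above would look like this in `was_suite_config.json` (a minimal fragment; the real file contains many other keys, and the path shown is just the example Windows path from above):

```json
{
  "wildcards_path": "E:\\python\\automatic\\webui3\\stable-diffusion-webui\\extensions\\sd-dynamic-prompts\\wildcards"
}
```

Remember to escape backslashes in Windows paths, as JSON treats a single `\` as an escape character.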
Conditioning Input Switch: Switch between two conditioning inputs.
Constant Number
Create Grid Image: Create an image grid from images at a destination with a customizable glob pattern. Optional border size and color.
Create Morph Image: Create a GIF/APNG animation from two images, fading between them.
Create Morph Image by Path: Create a GIF/APNG animation from a path to a directory containing images, with optional pattern.
Create Video from Path: Create video from images from a specified path.
CLIPSeg Masking: Mask an image with CLIPSeg and return a raw mask
CLIPSeg Masking Batch: Create a batch image (from image inputs) and batch mask with CLIPSeg
Dictionary to Console: Print a dictionary input to the console
Image Analyze
Black White Levels
RGB Levels
Depends on `matplotlib`; will attempt to install on first run
Diffusers Hub Down-Loader: Download a diffusers model from the HuggingFace Hub and load it
Image Batch: Create one batch out of multiple batched tensors.
Image Blank: Create a blank image in any color
Image Blend by Mask: Blend two images by a mask
Image Blend: Blend two images by opacity
Image Blending Mode: Blend two images by various blending modes
Image Bloom Filter: Apply a high-pass based bloom filter
Image Canny Filter: Apply a Canny filter to an image
Image Chromatic Aberration: Apply a chromatic aberration lens effect to an image like in sci-fi films, movie theaters, and video games
Image Color Palette
Generate a color palette based on the input image.
Depends on `scikit-learn`; will attempt to install on first run.
Supports a color range of 8-256.
Utilizes the font in `./res/` unless unavailable, in which case it falls back to an internal better-than-nothing font.
Image Crop Face: Crop a face out of an image
Limitations:
Sometimes no faces are found in badly generated images, or faces at angles
Sometimes face crop is black, this is because the padding is too large and intersected with the image edge. Use a smaller padding size.
face_recognition mode sometimes finds random things as faces. It also requires a [CUDA] GPU.
Only detects one face. This is a design choice to make its use easy.
Notes:
Detection runs in succession. If nothing is found with the selected detection cascades, it will try the next available cascades file.
Image Crop Location: Crop an image to a specified location with top, left, right, and bottom values relating to the pixel dimensions of the image in X and Y coordinates.
Image Crop Square Location: Crop a location by X/Y center, creating a square crop around that point.
Image Displacement Warp: Warp an image by a displacement map image with a given amplitude.
Image Dragan Photography Filter: Apply an Andrzej Dragan photography style to an image
Image Edge Detection Filter: Detect edges in an image
Image Film Grain: Apply film grain to an image
Image Filter Adjustments: Apply various image adjustments to an image
Image Flip: Flip an image horizontally or vertically
Image Gradient Map: Apply a gradient map to an image
Image Generate Gradient: Generate a gradient map with desired stops and colors
Image High Pass Filter: Apply a high frequency pass to the image returning the details
Image History Loader: Load images from history based on the Load Image Batch node. Can define max history in config file. (requires restart to show last sessions files at this time)
Image Input Switch: Switch between two image inputs
Image Levels Adjustment: Adjust the levels of an image
Image Load: Load an image from any path on the system, or a URL starting with `http`
Image Median Filter: Apply a median filter to an image, such as to smooth out details in surfaces
Image Mix RGB Channels: Mix together RGB channels into a single image
Image Monitor Effects Filter: Apply various monitor effects to an image
Digital Distortion
A digital breakup distortion effect
Signal Distortion
An analog signal distortion effect on vertical bands like a CRT monitor
TV Distortion
A TV scanline and bleed distortion effect
Image Nova Filter: A filter that uses a sinus frequency to break an image apart into RGB frequencies
Image Perlin Noise: Generate perlin noise
Image Perlin Power Fractal: Generate a perlin power fractal
Image Paste Face Crop: Paste a face crop back onto an image at its original location and size
Features a better blending function than GFPGAN/CodeFormer, so there shouldn't be visible seams; coupled with Diffusion Result, it looks better than GFPGAN/CodeFormer.
Image Paste Crop: Paste a crop (such as from Image Crop Location) at its original location and size utilizing the `crop_data` node input. This uses a different blending algorithm than Image Paste Face Crop, which may be desired in certain instances.
Image Power Noise: Generate power-law noise
frequency: The frequency parameter controls the distribution of the noise across different frequencies. In the context of Fourier analysis, higher frequencies represent fine details or high-frequency components, while lower frequencies represent coarse details or low-frequency components. Adjusting the frequency parameter can result in different textures and levels of detail in the generated noise. The specific range and meaning of the frequency parameter may vary depending on the noise type.
attenuation: The attenuation parameter determines the strength or intensity of the noise. It controls how much the noise values deviate from the mean or central value. Higher values of attenuation lead to more significant variations and a stronger presence of noise, while lower values result in a smoother and less noticeable noise. The specific range and interpretation of the attenuation parameter may vary depending on the noise type.
noise_type: The type of power-law noise to generate (white, grey, pink, green, blue)
Image Paste Crop by Location: Paste a crop to a custom location. This uses the same blending algorithm as Image Paste Crop.
Image Pixelate: Turn an image into pixel art! Define the max number of colors, the pixelation mode, the random state, and max iterations, and make those sprites shine.
Image Remove Background (Alpha): Remove the background from an image by threshold and tolerance.
Image Remove Color: Remove a color from an image and replace it with another
Image Resize
Image Rotate: Rotate an image
Image Save: A save image node with format support and path support. (Bug: Doesn't display image)
Image Seamless Texture: Create a seamless texture out of an image with optional tiling
Image Select Channel: Select a single channel of an RGB image
Image Select Color: Return the selected color only on a black canvas
Image Shadows and Highlights: Adjust the shadows and highlights of an image
Image Size to Number: Get the `width` and `height` of an input image to use with Number nodes.
Image Stitch: Stitch images together on different sides with optional feathering blending between them.
Image Style Filter: Style an image with Pilgram Instagram-like filters
Depends on the `pilgram` module
Image Threshold: Return the desired threshold range of an image
Image Tile: Split an image up into an image batch of tiles. Can be used with Tensor Batch to Image to select an individual tile from the batch.
Image Transpose
Image fDOF Filter: Apply a fake depth of field effect to an image
Image to Latent Mask: Convert an image into a latent mask
Image to Noise: Convert an image into noise, useful for init blending or init input to theme a diffusion.
Image to Seed: Convert an image to a reproducible seed
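The suite's exact seeding algorithm isn't documented here, but the idea of deriving a reproducible seed from an image can be sketched as hashing the pixel bytes (the function name below is illustrative, not the suite's API):

```python
import hashlib

def image_bytes_to_seed(pixel_bytes: bytes) -> int:
    """Derive a reproducible 32-bit seed from raw image bytes.

    Illustrative sketch only -- WAS Node Suite's actual algorithm
    may differ.
    """
    digest = hashlib.sha256(pixel_bytes).digest()
    # Fold the first 4 bytes of the digest into an unsigned 32-bit int,
    # a typical seed range for samplers.
    return int.from_bytes(digest[:4], "big")

# The same input always yields the same seed.
seed = image_bytes_to_seed(b"\x00\x01\x02\x03" * 256)
```

Because the seed is a pure function of the image contents, re-running a workflow on the same image reproduces the same seed.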
Image Voronoi Noise Filter
A custom implementation of the Worley (Voronoi) noise diagram
Input Switch (disabled until `*` wildcard fix)
KSampler (WAS): A sampler that accepts a seed as a node input
Load Cache: Load cached Latent, Tensor Batch (image), and Conditioning files.
Load Text File
Now supports outputting a dictionary named after the file, or custom input.
The dictionary contains a list of all lines in the file.
Load Batch Images
Increment images in a folder, or fetch a single image out of a batch.
Will reset its place if the path or pattern is changed.
The pattern is a glob that allows you to do things like `**/*` to get all files in the directory and subdirectories, or things like `*.jpg` to select only JPEG images in the specified directory.
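These glob patterns behave like Python's standard `glob` module (presumably what backs the node); a quick illustration of the two patterns mentioned above:

```python
import glob
import os
import tempfile

# Build a small directory tree to demonstrate the two patterns.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
for name in ("a.jpg", "b.png", os.path.join("sub", "c.jpg")):
    open(os.path.join(root, name), "w").close()

# "*.jpg" matches only JPEGs in the top-level directory...
top_jpgs = glob.glob(os.path.join(root, "*.jpg"))

# ...while "**/*" with recursive matching walks subdirectories too
# (it also matches the "sub" directory entry itself).
everything = glob.glob(os.path.join(root, "**", "*"), recursive=True)
```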
Mask to Image: Convert `MASK` to `IMAGE`
Mask Batch to Mask: Return a single mask from a batch of masks
Mask Invert: Invert a mask.
Mask Add: Add masks together.
Mask Subtract: Subtract from a mask by another.
Mask Dominant Region: Return the dominant region in a mask (the largest area)
Mask Minority Region: Return the smallest region in a mask (the smallest area)
Mask Arbitrary Region: Return a region that most closely matches the size input (size is not a direct representation of pixels, but approximate)
Mask Smooth Region: Smooth the boundaries of a mask
Mask Erode Region: Erode the boundaries of a mask
Mask Dilate Region: Dilate the boundaries of a mask
Mask Fill Region: Fill holes within the masks regions
Mask Ceiling Region: Return only white pixels within an offset range.
Mask Floor Region: Return the lowermost pixel values as white (255)
Mask Threshold Region: Apply a thresholded image between a black value and white value
Mask Gaussian Region: Apply a Gaussian blur to the mask
Masks Combine Masks: Combine 2 or more masks into one mask.
Masks Combine Batch: Combine batched masks into one mask.
ComfyUI Loaders: A set of ComfyUI loaders that also output a string that contains the name of the model being loaded.
Latent Noise Injection: Inject latent noise into a latent image
Latent Size to Number: Latent sizes in tensor width/height
Latent Upscale by Factor: Upscale a latent image by a factor
Latent Input Switch: Switch between two latent inputs
Logic Boolean: A simple `1` or `0` output to use with logic
MiDaS Depth Approximation: Produce a depth approximation of a single image input
MiDaS Mask Image: Mask an input image using MiDaS with a desired color
Number Operation
Number to Seed
Number to Float
Number Input Switch: Switch between two number inputs
Number Input Condition: Compare between two inputs or against the A input
Number to Int
Number to String
Number to Text
Random Number
Save Text File: Save a text string to a file
Seed: Return a seed
Tensor Batch to Image: Select a single image out of a latent batch for post processing with filters
Text Add Tokens: Add custom tokens to parse in filenames or other text.
Text Add Token by Input: Add a custom token by inputs representing the single-line name and value of the token
Text Compare: Compare two strings. Returns a boolean if they are the same, a score of similarity, and the similarity or difference text.
Text Concatenate: Merge two strings
Text Dictionary Update: Merge two dictionaries
Text File History: Show previously opened text files (requires restart to show last sessions files at this time)
Text Find and Replace: Find and replace a substring in a string
Text Find and Replace by Dictionary: Replace substrings in an ASCII text input with a dictionary.
The dictionary keys are used as the keys to replace, and a line from each key's list is chosen at random based on the seed.
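That behavior can be sketched as follows (a hypothetical re-implementation of the described behavior, not the node's actual code): each dictionary key is a substring to replace, and the replacement is drawn from the key's list of lines using a seeded random generator so results are reproducible.

```python
import random

def replace_by_dictionary(text: str, replacements: dict, seed: int) -> str:
    """Replace each dict key found in `text` with a line chosen at
    random (seeded, hence reproducible) from that key's list of lines.

    Hypothetical sketch of the node's described behavior.
    """
    rng = random.Random(seed)
    for key, lines in replacements.items():
        text = text.replace(key, rng.choice(lines))
    return text

out = replace_by_dictionary(
    "a photo of a [animal]",
    {"[animal]": ["cat", "dog", "fox"]},
    seed=42,
)
```

With a fixed seed the same line is picked every run; changing the seed varies which line each key resolves to.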
Text Input Switch: Switch between two text inputs
Text List: Create a list of text strings
Text Concatenate: Merge lists of strings
Text Multiline: Write a multiline text string
Text Parse A1111 Embeddings: Convert embedding filenames in your prompts to `embedding:[filename]` format based on your `/ComfyUI/models/embeddings/` files.
Text Parse Noodle Soup Prompts: Parse NSP in a text input
Text Parse Tokens: Parse custom tokens in text.
Text Random Line: Select a random line from a text input string
Text String: Write a single line text string value
Text to Conditioning: Convert a text string to conditioning.
True Random.org Number Generator: Generate a truly random number online from atmospheric noise with Random.org
Write to Morph GIF: Write a new frame to an existing GIF (or create new one) with interpolation between frames.
Write to Video: Write a frame as you generate to a video (Best used with FFV1 for lossless images)
Extra Nodes
CLIPTextEncode (BlenderNeko Advanced + NSP): Only available if you have BlenderNeko's Advanced CLIP Text Encode. Allows for NSP and Wildcard use with their advanced CLIPTextEncode.
Video Nodes
Codecs
You can use codecs that are available to your FFmpeg binaries by adding their FourCC ID (as one string) and the appropriate container extension to `was_suite_config.json`.
Example H264 Codecs (Defaults)
"ffmpeg_extra_codecs": {
"avc1": ".mp4",
"h264": ".mkv"
}
Notes
For now I am only supporting Windows installations for video nodes.
I do not have access to Mac or a stand-alone linux distro. If you get them working and want to PR a patch/directions, feel free.
Video nodes require FFMPEG. You should download the proper FFMPEG binaries for your system and set the FFMPEG path in the config file.
Additionally, if you want to use the H264 codec, you need to download OpenH264 1.8.0 and place it in the root of ComfyUI (Example: `C:\ComfyUI_windows_portable`).
FFV1 will complain about an invalid container. You can ignore this; the resulting MKV file is readable. I have not figured out what this issue is about. Documentation tells me to use MKV, but it's telling me it's unsupported.
If you know how to resolve this, I'd love a PR
The `Write to Video` node should use a lossless video codec, or when it copies frames and reapplies compression, it will start exponentially ruining the starting frames run to run.
Text Tokens
Text tokens can be used in the Save Text File and Save Image nodes. You can also add your own custom tokens with the Text Add Tokens node.
The token name can be anything excluding the `:` character. It can also be a simple regular expression.
Built-in Tokens
[time]
The current system microtime
[time(format_code)]
The current system time in human-readable format, utilizing datetime formatting
Example:
`[hostname]_[time]__[time(%Y-%m-%d__%I-%M%p)]` would output: SKYNET-MASTER_1680897261__2023-04-07__07-54PM
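The format codes are standard Python `datetime.strftime` codes, so a token's output can be previewed directly:

```python
from datetime import datetime

# A fixed timestamp so the output is reproducible; the node would use
# the current system time instead.
ts = datetime(2023, 4, 7, 19, 54)

# The same format string as the [time(...)] example above:
# year-month-day, then 12-hour time with AM/PM.
formatted = ts.strftime("%Y-%m-%d__%I-%M%p")
# -> "2023-04-07__07-54PM"
```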
[hostname]
The hostname of the system executing ComfyUI
[user]
The user that is executing ComfyUI
Other Features
Import AUTOMATIC1111 WebUI Styles
When using the latest builds of WAS Node Suite, a `was_suite_config.json` file will be generated (if it doesn't exist). In this file you can set up an A1111 styles import.
Run ComfyUI to generate the new `/custom-nodes/was-node-suite-comfyui/was_suite_config.json` file.
Open the `was_suite_config.json` file with a text editor.
Replace the `webui_styles` value from `None` with the path of your A1111 styles file called `styles.csv`. Be sure to use double backslashes for Windows paths.
Example:
C:\\python\\stable-diffusion-webui\\styles.csv
Restart ComfyUI
Select a style with the Prompt Styles Node.
The first ASCII output is your positive prompt, and the second ASCII output is your negative prompt.
You can set webui_styles_persistent_update to true to update the WAS Node Suite styles from WebUI every start of ComfyUI
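For reference, A1111's `styles.csv` has `name`, `prompt`, and `negative_prompt` columns; a minimal sketch of how such a file parses (illustrative only — the suite handles this internally):

```python
import csv
import io

# Example contents of an A1111 styles.csv (abbreviated).
STYLES_CSV = """name,prompt,negative_prompt
cinematic,"cinematic lighting, film grain","blurry, lowres"
"""

def load_styles(fp) -> dict:
    """Map style name -> (positive prompt, negative prompt)."""
    return {
        row["name"]: (row["prompt"], row["negative_prompt"])
        for row in csv.DictReader(fp)
    }

styles = load_styles(io.StringIO(STYLES_CSV))
```

The two values per style correspond to the node's two ASCII outputs: positive prompt first, negative prompt second.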
Recommended Installation:
If you're running on Linux, or a non-admin account on Windows, you'll want to ensure `/ComfyUI/custom_nodes`, `was-node-suite-comfyui`, and `WAS_Node_Suite.py` have write permissions.
Navigate to your `/ComfyUI/custom_nodes/` folder
Run `git clone https://github.com/WASasquatch/was-node-suite-comfyui/`
Navigate to your `was-node-suite-comfyui` folder
Portable/venv:
Run `path/to/ComfyUI/python_embeded/python.exe -m pip install -r requirements.txt`
With system python
Run
pip install -r requirements.txt
Start ComfyUI
WAS Suite should uninstall legacy nodes automatically for you.
Tools will be located in the WAS Suite menu.
Alternate Installation:
If you're running on Linux, or a non-admin account on Windows, you'll want to ensure `/ComfyUI/custom_nodes` and `WAS_Node_Suite.py` have write permissions.
Download `WAS_Node_Suite.py`
Move the file to your `/ComfyUI/custom_nodes/` folder
WAS Node Suite will attempt to install dependencies on its own, but you may need to do so manually. The required dependencies are in `requirements.txt` in this repo. See the installation steps above.
Start, or restart, ComfyUI
WAS Suite should uninstall legacy nodes automatically for you.
Tools will be located in the WAS Suite menu.
This method will not install the resources required for Image Crop Face node, and you'll have to download the ./res/ folder yourself.
Installing on Colab
Create a new cell and add the following code, then run the cell. You may need to edit the path to your custom_nodes folder. You can also use the colab hosted here
`!git clone https://github.com/WASasquatch/was-node-suite-comfyui /content/ComfyUI/custom_nodes/was-node-suite-comfyui`
Restart the Colab runtime (don't disconnect)
Tools will be located in the WAS Suite menu.
Github Repository: https://github.com/WASasquatch/was-node-suite-comfyui
❤️ Hearts and ⭐ Reviews let me know you want moarr! :3
Description
Updates:
Various patches to nodes.
ASCII is fully deprecated and the new `TEXT_TYPE` is `STRING`.
Dependency changes: be sure to re-run `requirements.txt` with the Python executable that ComfyUI uses (such as the one at `/python_embedded/python.exe` for ComfyUI Portable).
New Nodes:
CLIPSeg Masking: Mask a image with CLIPSeg and return a raw mask
CLIPSeg Masking Batch: Create a batch image (from image inputs) and batch mask with CLIPSeg
Diffusers Hub Down-Loader: Download a diffusers model from the HuggingFace Hub and load it
Image Batch: Create one batch out of multiple batched tensors.
Image Crop Square Location: Crop a location by X/Y center, creating a square crop around that point.
Image Displacement Warp: Warp an image by a displacement map image with a given amplitude.
Image Perlin Noise: Generate perlin noise
Image Perlin Power Fractal: Generate a perlin power fractal
Image Paste Face Crop: Paste a face crop back onto an image at its original location and size
Image Paste Crop: Paste a crop (such as from Image Crop Location) at its original location and size utilizing the `crop_data` node input. This uses a different blending algorithm than Image Paste Face Crop, which may be desired in certain instances.
Image Power Noise: Generate power-law noise
Image Paste Crop by Location: Paste a crop to a custom location. This uses the same blending algorithm as Image Paste Crop.
Image Pixelate: Turn an image into pixel art! Define the max number of colors, the pixelation mode, the random state, and max iterations, and make those sprites shine.
Image Tile: Split an image up into an image batch of tiles. Can be used with Tensor Batch to Image to select an individual tile from the batch.
Image to Noise: Convert an image into noise, useful for init blending or init input to theme a diffusion.
Image to Seed: Convert an image to a reproducible seed
Mask Batch to Mask: Return a single mask from a batch of masks
Mask Invert: Invert a mask.
Mask Add: Add masks together.
Mask Subtract: Subtract from a mask by another.
Mask Ceiling Region: Return only white pixels within an offset range.
Mask Floor Region: Return the lowermost pixel values as white (255)
Mask Threshold Region: Apply a thresholded image between a black value and white value
Mask Gaussian Region: Apply a Gaussian blur to the mask
Masks Combine Masks: Combine 2 or more masks into one mask.
Masks Combine Batch: Combine batched masks into one mask.
Text List: Create a list of text strings
Text Concatenate: Merge lists of strings
FAQ
Comments (46)
im planning to start learning ComfyUI, thanx for the starterpack 😀
It's amazing. I haven't gone back to A1111 except to train TIs. Haha
I ask you: can you use it with an iPad tablet?
It has somewhat decent mobile support, but you'll have to run it on a compatible PC and then connect to it over WiFi or the internet. An easy way to access it online is to run it on a PC and then install ngrok. Then in cmd prompt or terminal you run `ngrok http 8188` and it will give you back a public link to use.
I can't install correctly on win10. I had errors with install ffmpy and scikit-image modules
WAS-NS doesn't use ffmpy, it uses opencv-python-headless[ffmpeg] instead. So not sure why you're encountering that.
You have to run the requirements with the correct Python that ComfyUI is running on. So if you downloaded the portable version, you have to run the requirements with the python_embedded/python.exe executable.
Additionally you need cmake on your system and in your PATH: https://cmake.org/download/
(IMPORT FAILED): D:\ComfyUI-master\custom_nodes\was-node-suite-comfyui
That doesn't help solve any issues. You need to provide the full stack error. Follow the installation instructions closely.
I can't select lbpcascade_animeface.xml nor haarcascade_profileface.xml in the Image Face Crop node. ComfyUI just doesn't see them. Any idea how to fix that?
Something may be wrong with your ComfyUI install possibly. They're hard-coded options for the selection so should be select-able unless something is wrong with the front end.
@WASÂ Turns out that the ComfyUI Custom Node Manager doesn't cooperate with your suite. After going through the entire "update through PowerShell commands" ordeal, I managed to fix it.
@SeriouslyMike Yeah, WAS-NS requires things that comfyui uses, so can't install properly if ComfyUI is running.
@WASÂ oh, that explains the update error in the manager.
Anyway, I'm trying to figure out the KSampler Cycle node - you haven't mentioned it in the description of the suite at all, so I have no idea what the controls do.
@SeriouslyMike I thought I pushed to the node descriptions? I did have a PR or two between so maybe it got reverted by accident cause I had a conflict with my own changes.
It's basically an HR pass node. You can put in upscale factor you want, and then how many upscale cycles it takes. Which is how many times it's diffused at incremental steps to final factor.
You can hook up a secondary diffusion model and secondary cycle step that the secondary model is used at. So if you have 2 upscale_cycles you would set secondary cycle to 2. If you had 4 upscale cycles you'd set it to 3 to make it start halfway.
You can scale down the denoising for each step based on the cycle_denoise and enabling scale_noise. So for a fresh diffusion with no init image you would have starting_denoise at 1.0 and then could use, say, cycle_denoise 0.5, and with scaling it would decrease the denoise each step, starting at 0.5 at the second cycle step.
You can enable latent upscaling; if enabled it will use the selected sampling mode to do upscaling. This bypasses the VAE input and doesn't decode and encode the image between each pass. This will bypass upscale models and scale sampling, FYI.
You can select a scale sampling mode and use that alone for upscaling, which will supersample results in raster space for better upscaling without an upscale model.
You can hook up a upscale model to use the upscale model for incremental upscaling, scale sampling is used to downscale the result before the next cycle step.
You can hook up a processor model, which is just an upscale model with 1x output, meant for stuff like de-JPEG artifacts, denoising, etc. This will run between each step, and before the optional upscale model.
You can add an additive positive and negative prompt and desired strength. If strength scaling is active it will exponentially increase the strength to the cutoff value. Helpful for transitioning to another prompt. It uses the Combine Conditioning (Average) node behind the scenes. This can be used for all sorts of stuff. For example you could have the main model be an anime model and the secondary be a realistic model, then use the additive prompts to describe 3D modifiers and negative stuff like anime, art, oil painting, etc. This will have the effect of transforming the anime image into something like a 3D image.
Sorry if any errors. Wrote this on my phone. Not at the computer.
Definitely join the Element community listed on ComfyUI GitHub. I share example workflows and answer support on WAS-NS.
Do you have any instructions on how to use the "KSampler Cycle"? I am guessing it's some kind of all-in-one hires fix + upscaler node, but I need to know what the controls are doing.
How exactly do I uh... 'With system python, run pip install...'? My experience with Python ends with installing it for automatic1111, so I'm unfamiliar with, uh, running it inside a specific folder?
Are you using Windows? If so, I'd suggest downloading the portable package, which has the whole environment. Then follow the directions for portable install. Clone it to custom_nodes, go into the was-node-suite-comfyui folder, and then run `path/to/ComfyUI/python_embeded/python.exe -s -m pip install -r requirements.txt`, changing path/to/ComfyUI/python_embeded to the path to the python_embeded folder in your ComfyUI root.
https://github.com/WASasquatch/was-node-suite-comfyui#recommended-installation
@WASÂ Thanks for the reply. I've gotten a bit closer; finding out I can run python through command prompt and all that, but now I'm getting a 'Getting requirements to build wheel ... error' whenever I get to 'Collecting scipy', and nothing I've tried seems to fix that. Could I ask you to tell me where I could find a fix for this?
@Heckerman I'd need to see the full error. Maybe cmake isn't available. You can make an issue on GitHub. There's an issues tab on link above.
@WASÂ Yes I have no luck with this either, I use the Run command and the window errors out and closes so fast there is no way to even see it. I also am a newb with this stuff, but it seems like there should be an easier way. I realize Linux is god to tech bros, but I am not trying to learn a lifetime of IT coding, just wanted to try and get a custom node working on comfyUI. I already found a solution to the problem I had in the first place, so don't need a node for it anyway. I will try some of the other ones on here at a later time just out of curiosity, but this has frustrated me a bit :(
@MysteryWrecked This is usually the way with Python in general. ComfyUI Manager will get an update hopefully soon to handle installations better. But I am making an install.bat because it's frustrating dealing with everyone else's frustrations on a daily basis. Lol
@WAS I appreciate the effort, if I was a bit more advanced, I'm sure it would be no trouble, but I'm at the edge of my skillset here. I did get a couple custom nodes to install when I restarted ComfyUI again, and then I installed the node manager, which gave me WAS suite in a list to install. I did that installation, but still have to find out if it worked. Thanks for everything!
@MysteryWrecked I have pushed a install.bat that should hopefully solve the issues.
@WASÂ Sorry to be so late on this, I actually got it to work with the ComfyUI Manager at least, but the .bat is appreciated and will help lots of people!
@MysteryWrecked Not sure if you already know this, but if you click on the folder path in file explorer and type "cmd" (no quotation marks) in the folder you want to run cmd prompt in, it'll just start in that folder, no need to use cd. it saves me a lot of time since i can never remember that i have to use /d if i want to change drives with cd.
@Genie123Â That and Right Click -> Open in Terminal
@WASÂ I was a power user like you guys once, but then I took an arrow to the brain! In other words, I got old. Thanks for the tip!
@MysteryWrecked Yeah, no I hear ya there. Things have changed dramatically. I struggle to keep up, and Web Development, my old field of work, has completely changed pretty much.
I haven't been able to get this to work correctly. Windows 11. I've tried running as an admin, and by setting full permissions to the entire Comfy UI folder. It happens during the script to install the requirements. ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'A:\\comfyUI\\ComfyUI_windows_portable_nvidia_cu118_or_cpu\\ComfyUI_windows_portable\\python_embeded\\Lib\\site-packages\\cv2\\cv2.pyd'
Consider using the --user option or check the permissions.
I also never get the .json file referenced
Okay, I did a complete clean install of ComfyUI using the download tar archive, then set permissions, then ran the install requirements per your instructions before ever starting up comfy UI, and it worked out this time. I did also get the json file inside the WAS folder this time. So that part is good, but trying to install some of the other custom node packages using the manager, I get the following error:
ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied: 'A:\\ComfyUI-July23\\python_embeded\\Lib\\site-packages\\cv2\\cv2.pyd'
Consider using the --user option or check the permissions.
So maybe it's the manager? I've set permissions for the entire ComfyUI folder to be able to modify and write so I don't understand the error for the other packages.
@EricRollei21Â Node Manger can't really install nodes that do anything involving packages that are running. Numpy, etc. You need to follow my instructions to install. Namely running requirements.txt through your embedded_python/python.exe.
ComfyUI Manager is only going to work for nodes that don't really need dependencies, or that use totally different dependencies from what ComfyUI has already imported. It's a cool tool, but it would need to be a stand-alone tool to actually achieve what it advertises.
@WASÂ Thanks for the reply and of course the custom nodes! So I should run the python.exe install -r requirements.txt for every custom nodes folder that has the requirments.txt file in it? And to do that only when the ComfyUI isn't running right? The embeded-python install requirements only needs to be done once right? I think I've got it all running okay now - except for one minor thing - there is an undefined node (image scale) that has inputs of image, width, and height on left side and output of image on the right side.
@EricRollei21Â you need to run through python_embedded/python.exe in root of ConfyUI if you are using portable version
I have this issue, How do i fix it PLS.
Import times for custom nodes:
0.0 seconds: A:\AI_Files\ComfyUI_Main\ComfyUI\custom_nodes\Pseudo_HDR_ally.py
0.0 seconds: A:\AI_Files\ComfyUI_Main\ComfyUI\custom_nodes\Pseudo_HDR_ally.py
0.0 seconds: A:\AI_Files\ComfyUI_Main\ComfyUI\custom_nodes\efficiency-nodes-comfyui
0.0 seconds: A:\AI_Files\ComfyUI_Main\ComfyUI\custom_nodes\Derfuu_ComfyUI_ModdedNodes
0.0 seconds (IMPORT FAILED): A:\AI_Files\ComfyUI_Main\ComfyUI\custom_nodes\was-node-suite-comfyui-main
0.0 seconds (IMPORT FAILED): A:\AI_Files\ComfyUI_Main\ComfyUI\custom_nodes\was-node-suite-comfyui-main
0.0 seconds: A:\AI_Files\ComfyUI_Main\ComfyUI\custom_nodes\Derfuu_ComfyUI_ModdedNodes
1.2 seconds: A:\AI_Files\ComfyUI_Main\ComfyUI\custom_nodes\efficiency-nodes-comfyui
83.6 seconds (IMPORT FAILED): A:\AI_Files\ComfyUI_Main\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
88.1 seconds (IMPORT FAILED): A:\AI_Files\ComfyUI_Main\ComfyUI\custom_nodes\ComfyUI-Impact-Pack
ModuleNotFoundError: No module named 'numba'
Install the nodes according to the instructions. You need to run the requirements with ComfyUI's python environment. So for portable you need to run requirements through python_embedded/python.exe
I get intimidated by your node pack. It feels overwhelming just looking at it, and most of it I don't need atm. I'm looking for a GIF maker in ComfyUI; do you plan on creating/separating things into smaller node packs?
No
Good day to you, thank you for these awesome nodes!
I especially love the save node. Is there a way we can push the seed/used sampler/steps and so on to the save node so I can have this information in the name?
There's supposedly a way to capture the widget info and use it elsewhere, but I think there is an issue with custom widgets not being supported.
Is there any way to get the Seed Node to function with "convert seed to input"? As it currently is, it only seems to connect to the WAS Sampler.
Yeah, that's cause it's very old and before the Utils -> Primitive node. You can just use that and input into a seed input and it'll transform into a seed.
Is there a way to clear custom tokens?
Every time I try to reuse my workflow I get this barrage of custom tokens I don't need anymore. Also no idea why, but for some reason it turns into this:
CheckpointName: checkpoint,
checkpoint: checkpoint,
etc.
So it keeps adding to the already existing custom token, but in a way that seems very weird/wrong to me.
I've integrated this example fileloader and custom token node to load my checkpoints via file easily: https://pastebin.com/HbUeWEYX
Right now it's just stuck and won't update the value anymore, I fixed it before by switching the token name, but that's not great.
Definitely one of the very best node sets for ComfyUI! Thank you very much for these great tools!
Thank you! I appreciate the kind words!
Hi! Could you please explain how to use Prompt Multiple Styles Selector in the workflow?
You've documented that ':' isn't allowed in token names, but is there any list of what's not allowed in token values? For instance '\' tends to throw 're' errors depending on the character that follows it (probably regex alchemy)