CivArchive
    WAS's ComfyUI Workspaces (HR-Fix and more!) - HR-Fix Bloom Workspace

    These are workspaces to load into ComfyUI for various tasks, such as HR-Fix with AI model upscaling.

    Note: WAS's Comprehensive Node Suite (WAS Node Suite) now has a bloom filter that works similarly, except it provides a high-frequency pass to base the bloom on. This is more accurate, and is how screen-space bloom works in video games.

    Requirements:

    HR-Fix Usage:

    1. Extract "ComfyUI-HR-Fix_workspace.json" (or whatever the workspace is called)

    2. Load workspace with the "Load" button in the right-hand menu and select "ComfyUI-HR-Fix_workspace.json"

    3. Select your desired diffusion model

    4. Select a VAE model, or use the diffusion model's VAE

    5. Select your desired upscale model

    6. Change the prompt and sampling settings as seen fit.
      (currently v1 is set to 512x768, x4 = 2048x3072; v2 has a resize, so the final size is 1024x1536)
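    The resolution math in step 6 can be sketched against the workflow JSON itself. The node below is a hypothetical minimal excerpt in ComfyUI's API-style JSON (a real "ComfyUI-HR-Fix_workspace.json" contains many more nodes, and the node ID "3" is an assumption); it shows where the base sampling resolution lives and how the 4x upscale model turns it into the final size.

    ```python
    import json

    # Hypothetical minimal excerpt of a ComfyUI API-style workflow; the real
    # HR-Fix workspace JSON has many more nodes and different node IDs.
    workflow = json.loads("""
    {
      "3": {"class_type": "EmptyLatentImage",
            "inputs": {"width": 512, "height": 768, "batch_size": 1}}
    }
    """)

    # Step 6: the base sampling resolution set in the EmptyLatentImage node.
    latent = workflow["3"]["inputs"]
    latent["width"], latent["height"] = 512, 768

    # With a 4x upscale model, 512x768 becomes 2048x3072 as noted above.
    upscaled = (latent["width"] * 4, latent["height"] * 4)
    print(upscaled)  # (2048, 3072)
    ```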

    Description

    Requires the Filters Suite V3 and NSP CLIPTextEncode V2 nodes!

    Adjust the Gaussian Blur radius to taste under "Image Filters (Bloom Adjustment)"; optionally play with brightness and contrast to adjust the bloom area.
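    The adjustment above can be sketched outside ComfyUI. This is a minimal screen-space-style bloom in plain NumPy, not the actual Filters Suite implementation: a bright-pass threshold (the high-frequency pass mentioned in the note), a separable Gaussian blur whose radius plays the role of the "Gaussian Blur radius" knob, and a screen blend back onto the image. The `threshold` and `brightness` parameters are assumptions standing in for the brightness/contrast adjustments.

    ```python
    import numpy as np

    def gaussian_kernel(radius, sigma=None):
        """1D Gaussian kernel; `radius` plays the role of the blur-radius knob."""
        sigma = sigma or max(radius / 2.0, 1e-6)
        x = np.arange(-radius, radius + 1, dtype=np.float64)
        k = np.exp(-x**2 / (2 * sigma**2))
        return k / k.sum()

    def blur(img, radius):
        """Separable Gaussian blur on a 2D float image in [0, 1]."""
        k = gaussian_kernel(radius)
        img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
        img = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, img)
        return img

    def bloom(img, radius=8, threshold=0.8, brightness=1.0):
        """Bright-pass -> blur -> screen blend, as in screen-space bloom."""
        bright = np.clip(img - threshold, 0.0, None) * brightness  # high-frequency pass
        glow = blur(bright, radius)
        # Screen blend: lightens the image by the blurred glow, never exceeding 1.0.
        return 1.0 - (1.0 - img) * (1.0 - np.clip(glow, 0.0, 1.0))

    # Demo: a single bright pixel spreads glow onto its neighbors.
    img = np.zeros((32, 32))
    img[16, 16] = 1.0
    out = bloom(img, radius=8)
    ```

    A larger `radius` spreads the glow further; raising `threshold` restricts bloom to only the brightest areas, which is roughly what the brightness/contrast adjustment controls.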

    FAQ

    Comments (8)

    ctdde466 · Mar 19, 2023
    CivitAI

    That upscaling only works with low starting resolutions, as these upscalers need loads of VRAM.

    If you have a decent resolution of 800x800+, you can just take the latent output of the first sampler, send it through the latent upscaler, and then pass it to a second sampler.

    The upscale model itself will change the picture on its own, so a latent-only upscale will be much closer to the original seed.
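    The latent path described in this comment can be sketched in a few lines. This is a plain NumPy stand-in, not ComfyUI's actual node code: SD latents are shaped (4, H/8, W/8), so an 800x800 image has a 100x100 latent, and a latent-upscale node simply resizes that tensor before the second sampler refines it (real nodes also offer bilinear and other modes; nearest-neighbor here is just the simplest illustration).

    ```python
    import numpy as np

    def latent_upscale(latent, factor=2):
        """Nearest-neighbor upscale of a (channels, H, W) latent tensor."""
        return latent.repeat(factor, axis=1).repeat(factor, axis=2)

    # An 800x800 image has a 100x100 latent (800 / 8) with 4 channels.
    latent = np.random.default_rng(0).standard_normal((4, 100, 100))

    # Upscale the latent 2x, then feed it to the second sampler to refine.
    up = latent_upscale(latent, factor=2)
    print(up.shape)  # (4, 200, 200) -> decodes to 1600x1600
    ```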

    WAS
    Author
    Mar 20, 2023

    Latent upscaling is absolutely terrible. I don't even use that in WebUI. It's why SD super resolution and latent upscalers just aren't a thing in the AI world, and we instead use AI upscale models trained for specific tasks, each excelling at different things. Upscale models don't change images nearly as much as a latent-space representation or diffusion does.

    poiGenAI · Jun 21, 2023

    @WAS Latent upscaling adds a lot of details that other upscaling methods don't. It's a gamble, but when it pays off it's great.

    WAS
    Author
    Jun 22, 2023

    @poiGenAI I think you are just seeing another process, such as noise injection and sampling. If you look at raw latent upscaled images, like you can in ComfyUI, they are substantially inferior to even just resizing the raster image and just encoding it. In fact I don't even use upscale models anymore cause resizing the image with supersampling I'm WAS-NS best preserves original image.

    poiGenAI · Jun 23, 2023

    @WAS When you say raw, I hope you don't mean the output of the upscale latent node. You have to resample it like you'd do with an upscaled image, of course. This is also what Auto's webUI seems to do when selecting latent as the upscaler. Here is the first generation I did with this setup while writing this comment:
    https://drive.google.com/drive/folders/19f2VGu9Fq9iSoRSUg-t-2o0laEJefkYu?usp=sharing
    (original, latent2x, img2imgupscale (done in comfyUI))

    You can argue about the quality of it. If you say it's lower, I'd agree on this specific example. But you cannot argue with the added detail, especially the high-frequency details. Like I said, it's a gamble. Sometimes it will cut the subject in half or commit similar atrocities. If I cherry-picked a result, I would get an amazing latent-upscaled image that is incomparable to an img2img upscale. I've also added examples which I did a while ago with the webUI latent upscale in there. What I usually do is latent upscale first; then, if it doesn't work and I'm really attached to the image, I use other upscalers. If I'm not that attached to the image, I just generate until another latent upscale works. I can count on my fingers the number of images that have made me give up latent upscaling.

    The workflow JSON is also there even though it's simple. You can replace the resources with what you have and use supersampling instead of PSNR like I have for the img2img. It's also important to note that "best preserves the original image" is not part of my criteria for quality. If the image produced is better, it's better. Change is not bad, and if the best preservation is what you're after, you should not do any post-processing at all. This is why so many people's "HD" images actually look SD. They're trying to preserve an SD image.

    WAS
    Author
    Jun 24, 2023

    @poiGenAI That's simply because of VAE encoding, which isn't related to scaling. Some VAEs perform better than others, but they muddle details, which SD sampling then interprets as a smoother image. Also, when the VAE/SD sees fuzziness from upscaling, it interprets that as well, which can add to the issues. I always supersample, so the image is downscaled from a larger resolution and pixel shapes and such are smoothed out. This is similar to how AI model upscaling works: the models are usually 4x and such, so the result is downscaled to the 2x target.
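    The supersampling step described above can be sketched as a box-filter (area-average) downscale, which is one common way to do it; this plain NumPy version is an illustration, not the WAS-NS implementation. A 4x model's 2048x2048 output is averaged down to the 1024x1024 target of a 2x upscale, smoothing out pixel shapes in the process.

    ```python
    import numpy as np

    def supersample_down(img, factor=2):
        """Box-filter (average-pooling) downscale of a 2D image by `factor`.

        Assumes the image dimensions are exact multiples of `factor`.
        """
        h, w = img.shape
        return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    # e.g. the output of a 4x upscale model on a 512x512 base image...
    img4x = np.ones((2048, 2048), dtype=np.float64)
    # ...downscaled to the 2x target, averaging each 2x2 block of pixels.
    img2x = supersample_down(img4x, factor=2)
    print(img2x.shape)  # (1024, 1024)
    ```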

    poiGenAI · Jun 25, 2023 · 1 reaction

    @WAS Regardless of the reason, the end result is what matters. I've yet to see a workflow which produces better results than a 2x latent upscale when it works.

    GritAiArt · Jul 31, 2023
    CivitAI

    I don't use the Image Filters (Edge Enhance) node. T^T
    I use WAS Node Suite v2

    Other
    by WAS

    Details

    Downloads
    1,360
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/19/2023
    Updated
    5/13/2026
    Deleted
    -

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.