    hyperfusion_vpred finetune 3.3m images - v9 vpred
    NSFW

    This checkpoint was trained on 3.3m images of normal- to hyper-sized anime characters. It focuses mainly on breasts/ass/belly/thighs, but now handles more general tag topics as well. As of v8 the dataset is about 50%/50% anime and furry images. See the changelog article below for more version details and future plans.

    Note: This will be my final SD1x model. I wanted to see what the hyperfusion dataset was really capable of on SD1.5, so I let it train on 2x3090s for 10 months to squeeze every bit of concept knowledge out of it. This is the best concept model I've trained so far, but it still has the usual SD1x jankiness. I probably kept the Text Encoder LR too high for too long (0.5x -> 0.3x).

    Big shoutout to stuffer.ai for letting me host my model on their site to gather feedback. It was critical for resolving issues with the model early on, and a great way to see what needed improvement over time.


    V9 is a v_pred model, so you will need to use the YAML file in A1111, or the v_pred node in ComfyUI, along with cfg_rescale=0.6-0.8 in both. A1111 will also need the CFG_Rescale extension installed.
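    The YAML config is included in this model's downloads (under "Config"), and its one essential difference from a stock SD1 config is parameterization: "v". If you ever need to recreate it, here is a hedged Python sketch; the file paths are examples and the bundled config is authoritative:

        # Sketch: derive a v-pred config from the stock v1-inference.yaml.
        # A1111 automatically loads "<checkpoint_name>.yaml" placed next to
        # the checkpoint. Assumes the standard v1-inference.yaml layout.
        from pathlib import Path

        base = Path("configs/v1-inference.yaml").read_text()
        # parameterization: "v" is the field that switches sampling to v-prediction.
        vpred = base.replace("  params:\n", '  params:\n    parameterization: "v"\n', 1)

        ckpt = Path("models/Stable-diffusion/hyperfusionVpred_v9Vpred.safetensors")
        ckpt.with_suffix(".yaml").write_text(vpred)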

    I posted one old example ComfyUI workflow here: https://civarchive.com/images/64978187

    Other links:

    The OG hyperfusion LoRAs can be found here https://civarchive.com/models/16928
    There is also a backup HuggingFace link for these models

    I also uploaded the 1.4 million custom tags used in hyperfusion, for integrating into your own datasets

    Changelog Article Link

    Recommendations for v9_vpred finetune:

    sampler: Anything that is not a Karras sampler. Don't use Karras! Training with --zero_terminal_snr makes Karras samplers problematic. You will also need to use the Uniform scheduler in A1111, or at least "simple"/"normal" in Comfy.

    negative: I tested each of these tags separately to make sure they had a positive effect:

    worst quality, low rating, signature, artist name, artist logo, logo, unfinished, jpeg artifacts, artwork \(traditional\), sketch, horror, mutant, flat color, simple shading

    positive: "best quality, high rating" for the base style I trained into this model; more details in the Training Data docs

    cfg: 7-9

    cfg_rescale: 0.6-0.8. CFG rescale is required for this v_pred model; lower values tend to have less body horror, but darker images. (See the sketch after these recommendations for what it computes.)

    resolution: 768-1024 (closer to 896 for less body horror)

    clip skip: 2

    zero_terminal_snr: Enabled

    styling: You will want to choose a style first. The default style is pretty meh. Try the new artist tags included in v8+, all tags can be found in the tags.csv by searching for "(artist)". See example images for art styles.

    Lora/TI: LoRAs trained on other models will not work with this model; even LoRAs trained on other v_pred models are not guaranteed to work here.
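    For reference, cfg_rescale is the variance-rescaled guidance from the zero-terminal-SNR paper (Lin et al., 2023), which is what the A1111 CFG_Rescale extension and Comfy's RescaleCFG node implement. A minimal sketch of the computation (illustrative, not this model's or either UI's exact code):

        import torch

        def rescaled_cfg(pos, neg, cfg_scale=8.0, rescale=0.7):
            # pos/neg: conditional and unconditional model outputs, [B, C, H, W].
            cfg = neg + cfg_scale * (pos - neg)              # ordinary CFG
            std_pos = pos.std(dim=(1, 2, 3), keepdim=True)   # per-sample std
            std_cfg = cfg.std(dim=(1, 2, 3), keepdim=True)
            rescaled = cfg * (std_pos / std_cfg)             # match conditional variance
            # rescale is the 0.6-0.8 slider: blend rescaled and plain CFG.
            return rescale * rescaled + (1.0 - rescale) * cfg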

    Recommendations for v8 LoRA:

    sampler: Anything that is not a Karras sampler. Don't use Karras! Training with --zero_terminal_snr makes that sampler problematic.

    Lora/TI: If you are using LoRAs/TIs trained on NovelAI-based models, they might do more harm than good. Try without them first.

    negative: low rating, lowres, text, signature, watermark, username, blurry, transparent background, ugly, sketch, unfinished, artwork \(traditional\), multiple views, flat color, simple shading, rough sketch

    cfg: 8 (it needs less than the hyperfusion LoRA)

    resolution: 768-1024 (closer to 768 for less body horror)

    clip skip: 2

    styling: Try the new artist tags included in v8, all tags can be found in the tags.csv by searching for "(artist)"


    Tag Info (you definitely want to read the tag docs; see the Training Data section)


    Because hyperfusion is a conglomeration of multiple tagging schemes, I've included a tag guide in the Training Data download section. It describes the way the tags work (similar to Danbooru tags), which tags the model knows best, and all my custom labeled tags.
    For the most part you can use the majority of tags from Danbooru, Gelbooru, r-34, and e621 related to breasts/ass/belly/thighs/nipples/vore/body_shape.

    The best method I have found for tag exploration is to go to one of the booru sites above, copy the tags from any image you like, and use them as a base; there are simply too many tags trained into this model to test them all. A starting-point prompt following the recommendations above is sketched below.
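    For example, a starting point assembled from the v9 recommendations above (the subject and artist tags here are only illustrative; swap in tags from your booru reference image):

        positive: best quality, high rating, some_artist \(artist\), 1girl, huge breasts, thick thighs, sitting, looking at viewer
        negative: worst quality, low rating, signature, artist name, artist logo, logo, unfinished, jpeg artifacts, artwork \(traditional\), sketch, horror, mutant, flat color, simple shading
        settings: cfg 8, cfg_rescale 0.7, clip skip 2, 896x896, uniform/simple scheduler, any non-Karras sampler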

    Tips

    • Because of the size and variety of this dataset, tags tend to behave differently than in most NovelAI-based models. Keep in mind that prompts from other models might need to be tweaked.

    • If you are not getting the results you expect from a tag, find other similar tags and include those as well. This model tends to spread its knowledge of a tag around to related tags, so including more will increase your chances of getting what you want.

    • Using the negative "3d" does a good job of making the image more anime-like if it starts veering too much into a rendered-model look.

    • Ass related tags have a strong preference for back shots, try a low strength ControlNet pose to correct this, or try one or more of these in the negatives "ass focus, from behind, looking back". The new "ass visible from front" tag can help too.

    • ...more tips in tag docs

    Extra


    This model took me months of failures and plenty of lessons learned (hence v7)! I would eventually like to train a few more image classifiers to improve certain tags, but those are all future dreams for now.

    As usual, I have no intention of monetizing any of my models. Enjoy the thickness!


    -Tagging-

    The key to tagging a large dataset is to automate it all. I started with the wd-tagger (or a similar Danbooru tagger) to append some common tags on top of the original tags. Eventually I added an e621 tagger too, but I generally only tag with a limited set of tags rather than the entire tag list (some tags are not accurate enough). Then I trained a handful of image classifiers, like breast size, breast shape, innie/outie navel, directionality, motion lines, and about 20 others, and let those tag for me. They not only improve on existing tags, but add completely new concepts to the dataset. Finally, I converted similar tags into one single tag as described in the tag docs (I have stopped doing this; with 3m images it really doesn't matter as much).

    Basically, any time I find it's hard to prompt for a specific thing, I throw together a new classifier. So far the only ones that don't work well are those that try to classify small details in the image, like signatures.

    Starting in v9 I am including ~10% captions alongside the tags. These captions are generated with CogVLM.

    I used this to train my image classifiers:
    https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification

    Ideally, I should train a multi-class-per-image classifier like the Danbooru tagger, but for now these single class-per-image classifiers work well enough.
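    As an illustration of how a single-class-per-image classifier slots into the tagging pass, here is a hedged sketch using the transformers pipeline API. The checkpoint path, label set, confidence threshold, and sidecar-file convention are all assumptions for illustration, not my actual pipeline:

        from pathlib import Path
        from PIL import Image
        from transformers import pipeline

        # Hypothetical checkpoint fine-tuned with the image-classification
        # example linked above (one label per image, e.g. breast-size bins).
        clf = pipeline("image-classification", model="./classifiers/breast_size")

        def append_tag(image_path: Path, min_score: float = 0.8) -> None:
            # Results come back sorted by score; keep only confident predictions.
            top = clf(Image.open(image_path))[0]
            if top["score"] >= min_score:
                tag_file = image_path.with_suffix(".txt")  # kohya-style tag sidecar
                tags = tag_file.read_text().strip() if tag_file.exists() else ""
                tag_file.write_text(f"{tags}, {top['label']}" if tags else top["label"])

        for path in Path("dataset").rglob("*.png"):
            append_tag(path)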

    -Software/Hardware-

    The training was all done on 3090s (2x3090 for this run) under Ubuntu. The software used is Kohya's sd-scripts trainer, since it currently has the most options to choose from.

    Description

    This version of hyperfusion was trained on 3.3 million images over 10 months, and is a v_prediction + zero_snr model based on SD1.5.

    This version was trained on SD 1.5, so there is no NovelAI influence in this checkpoint.

    More image classifiers trained, and existing classifiers improved (list of classified tags under Training Data section)

    Training Notes:

    • ~3.3m images

    • LR 4e-6

    • TE_LR 1e-6, dropped to 1e-7 (after epoch 10)

    • batch 8

    • GA 16

    • 2x3090s, so 2x the base batch size; total virtual batch = 8 x 16 x 2 = 256

    • total images seen: 190_000 steps * 256 = ~48.6 million (roughly 15 epochs over the 3.3m set)

    • AdamW-8bit (ADOPT for the last epoch as a test)

    • scheduler: linear

    • base model SD1.5

    • No custom VAE; I usually use the original SD1.5 VAE

    • flip aug

    • clip skip 2

    • 525 token length (appending captions + tags made this necessary)

    • bucketing at 768 max 1024

      • bucket resolution steps 32 for more buckets

      • trained at 768 for the first 10 epochs, and 1024 for the last 6

    • tag drop chance 0.15

    • caption_dropout 0.1

    • tag shuffling

    • --min_snr_gamma 3

    • --ip_noise_gamma 0.02

    • --zero_terminal_snr

    • about 10 months training time
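    Mapped onto stock kohya sd-scripts flags, the notes above would look roughly like the command below. This is a hedged reconstruction, not my actual invocation: the real run used a modified fork (see the custom configs below), a two-stage 768 -> 1024 resolution schedule, and a custom 525-token length (stock sd-scripts caps --max_token_length at 225); flag availability also varies by version.

        accelerate launch --num_processes=2 fine_tune.py \
          --pretrained_model_name_or_path=sd-v1-5.safetensors \
          --v_parameterization --zero_terminal_snr \
          --learning_rate=4e-6 --train_text_encoder --learning_rate_te=1e-6 \
          --optimizer_type=AdamW8bit --lr_scheduler=linear \
          --train_batch_size=8 --gradient_accumulation_steps=16 \
          --clip_skip=2 --max_token_length=225 \
          --enable_bucket --resolution=768,768 --max_bucket_reso=1024 --bucket_reso_steps=32 \
          --flip_aug --shuffle_caption \
          --caption_tag_dropout_rate=0.15 --caption_dropout_rate=0.1 \
          --min_snr_gamma=3 --ip_noise_gamma=0.02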

    Custom training configs:

    I have implemented a number of things in Kohya's training code that have been suggested to improve training, and kept the ones that seemed to make improvements.

    • Drop out 75% of tags 5% of the time, to hopefully improve results for short prompts

    • soft_min_snr instead of min_snr

    • --no_flip_when_cap_matches: prevent flipping images when certain tags exist, like "sequence, asymmetrical, before and after, text on*, written, speech bubble", etc. This should help with text, and with characters that have asymmetrical features.

    • --important_tags: move important tags to the beginning of the list, and sort them separately from the unimportant ones (suggested by NovelAI, if I remember correctly).

    • --tag_implication_dropout: drop one of a pair of similar tags so the model stops requiring both to be present when generating. For "breasts, big breasts", "breasts" will be dropped 30-50% of the time (see the sketch below). I used the tag-implications CSV from e621 as a base and added tags as needed. Even with 10-15% general tag dropout, some tag pairs were still being associated too often, so this definitely made a difference. I think there were about 5k tags in total on the dropout list.

    • 12% of the dataset is captioned with CogVLM, with many of the captions cleaned up by custom scripts that correct common problems.

    • Tags vs captions: 70% of the time use tags, ~20% of the time use captions (if they exist), and 10% of the time combine tags with captions in different orders.

    If I remember more custom changes, I'll add them later.
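    The tag-implication dropout is the most unusual tweak, so here is a minimal sketch of the idea. The implication pairs and the dropout rate are illustrative, and this is not the fork's actual code:

        import random

        # child tag -> implied parent tag, seeded from e621's tag-implications
        # CSV plus manual additions (illustrative entries only).
        IMPLICATIONS = {"big breasts": "breasts", "huge ass": "ass"}

        def implication_dropout(tags, p_drop=0.4):
            # When a child tag and its implied parent are both present, drop
            # the parent some of the time so the model stops needing both.
            present = set(tags)
            doomed = {parent for child, parent in IMPLICATIONS.items()
                      if child in present and parent in present
                      and random.random() < p_drop}
            return [t for t in tags if t not in doomed]

        # implication_dropout(["breasts", "big breasts", "smile"])
        # -> ["big breasts", "smile"] about 40% of the time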


    Comments (21)

    216396 · Dec 16, 2024

    Any chance for a Pony version of this please?

    throwawayjm (Author) · Dec 16, 2024

    If I do another finetune it will probably be on Noobai_vpred (once it's done), and probably with a more limited dataset size. I imagine training this dataset on PonyXL would take over a year.

    For sure there will be a noobai_vpred DoRA for hyperfusion, but a full finetune on it is still up in the air.

    tombot · Dec 19, 2024

    This is definitely the best version yet, but one problem I keep running into is that the skin occasionally turns light purple, and I don't know what is causing that or how to stop it.

    Edit: I added "purple skin" to the negative prompt and that seems to have done something. I'm still curious as to what causes it.

    throwawayjm (Author) · Dec 19, 2024

    I remember seeing it happen a few times in other people's generations on stuffer.ai, but I'm not sure what was causing it. Could be generation parameters, or some tag weights set too high? I personally have not managed to re-create it myself yet.

    goopy_gunkus · Dec 20, 2024

    By any chance does anyone have a ComfyUI setup for this model? I tried setting it up but the images look all jumbly.

    throwawayjm (Author) · Dec 21, 2024

    Edit: example https://civitai.com/images/64978187

    I haven't used it in ComfyUI, but at the very least you need these nodes:
    ModelSamplingDiscrete: with v_prediction and zsnr enabled
    RescaleCFG: set to 0.8
    KSampler: with the normal scheduler, and a non-Karras sampler

    Anything outside of that I'm not sure of.

    GenesisProductions · Mar 19, 2025

    Not sure if this will work, but here is a copy of the JSON for my basic workflow:

    {
      "workflow": {
        "last_node_id": 35,
        "last_link_id": 32,
        "nodes": [
          {"id": 14, "type": "ModelSamplingDiscrete", "pos": [-163, 69], "size": [315, 82], "flags": {}, "order": 2, "mode": 0,
            "inputs": [{"name": "model", "type": "MODEL", "link": 11}],
            "outputs": [{"name": "MODEL", "type": "MODEL", "shape": 3, "links": [12]}],
            "properties": {"Node name for S&R": "ModelSamplingDiscrete"}, "widgets_values": ["v_prediction", true]},
          {"id": 15, "type": "RescaleCFG", "pos": [535, -71], "size": [315, 58], "flags": {}, "order": 4, "mode": 0,
            "inputs": [{"name": "model", "type": "MODEL", "link": 12, "slot_index": 0}],
            "outputs": [{"name": "MODEL", "type": "MODEL", "shape": 3, "links": [13], "slot_index": 0}],
            "properties": {"Node name for S&R": "RescaleCFG"}, "widgets_values": [0.8]},
          {"id": 16, "type": "CLIPSetLastLayer", "pos": [-245, 448], "size": [315, 58], "flags": {"collapsed": false}, "order": 3, "mode": 0,
            "inputs": [{"name": "clip", "type": "CLIP", "link": 16}],
            "outputs": [{"name": "CLIP", "type": "CLIP", "shape": 3, "links": [20, 21], "slot_index": 0}],
            "properties": {"Node name for S&R": "CLIPSetLastLayer"}, "widgets_values": [-2]},
          {"id": 8, "type": "VAEDecode", "pos": [1845, 130], "size": [210, 46], "flags": {}, "order": 8, "mode": 0,
            "inputs": [{"name": "samples", "type": "LATENT", "link": 7}, {"name": "vae", "type": "VAE", "link": 8}],
            "outputs": [{"name": "IMAGE", "type": "IMAGE", "shape": 3, "links": [24], "slot_index": 0}],
            "properties": {"Node name for S&R": "VAEDecode"}, "widgets_values": []},
          {"id": 23, "type": "PreviewImage", "pos": [2434, 233], "size": [1043.43896484375, 1074.170654296875], "flags": {}, "order": 9, "mode": 0,
            "inputs": [{"name": "images", "type": "IMAGE", "link": 24}], "outputs": [],
            "properties": {"Node name for S&R": "PreviewImage"}, "widgets_values": []},
          {"id": 22, "type": "BNK_CLIPTextEncodeAdvanced", "pos": [537, 411], "size": [400, 200], "flags": {}, "order": 6, "mode": 0,
            "inputs": [{"name": "clip", "type": "CLIP", "link": 21}],
            "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [22], "slot_index": 0}],
            "properties": {"Node name for S&R": "BNK_CLIPTextEncodeAdvanced"},
            "widgets_values": ["child, kid, very young, loli, baby, worst quality, low rating, (signature), (bad hands:2), (bad arms:1.8), (bad legs:2), (more than 2 legs:2), (more than 2 arms:1.6), (background:3), more than 1 navel, (bad face:1.6), dark, (scenary:2), (background imagery:2)", "none", "A1111"]},
          {"id": 5, "type": "EmptyLatentImage", "pos": [539, 714], "size": [315, 106], "flags": {}, "order": 0, "mode": 0, "inputs": [],
            "outputs": [{"name": "LATENT", "type": "LATENT", "shape": 3, "links": [32], "slot_index": 0}],
            "properties": {"Node name for S&R": "EmptyLatentImage"}, "widgets_values": [1024, 1024, 1]},
          {"id": 21, "type": "BNK_CLIPTextEncodeAdvanced", "pos": [543, 161], "size": [400, 200], "flags": {}, "order": 5, "mode": 0,
            "inputs": [{"name": "clip", "type": "CLIP", "link": 20}],
            "outputs": [{"name": "CONDITIONING", "type": "CONDITIONING", "links": [23], "slot_index": 0}],
            "properties": {"Node name for S&R": "BNK_CLIPTextEncodeAdvanced"},
            "widgets_values": ["best quality, high rating, solo, (perfect anatomy:1.6), photorealistic, detailed, gorgeous face", "none", "A1111"]},
          {"id": 4, "type": "CheckpointLoaderSimple", "pos": [-755, 240], "size": [315, 98], "flags": {}, "order": 1, "mode": 0, "inputs": [],
            "outputs": [{"name": "MODEL", "type": "MODEL", "shape": 3, "links": [11]},
              {"name": "CLIP", "type": "CLIP", "shape": 3, "links": [16], "slot_index": 1},
              {"name": "VAE", "type": "VAE", "shape": 3, "links": [8]}],
            "properties": {"Node name for S&R": "CheckpointLoaderSimple"}, "widgets_values": ["hyperfusionVpred_v9Vpred.safetensors"]},
          {"id": 3, "type": "KSampler", "pos": [1302, 291], "size": [315, 474], "flags": {}, "order": 7, "mode": 0,
            "inputs": [{"name": "model", "type": "MODEL", "link": 13}, {"name": "positive", "type": "CONDITIONING", "link": 23},
              {"name": "negative", "type": "CONDITIONING", "link": 22}, {"name": "latent_image", "type": "LATENT", "link": 32}],
            "outputs": [{"name": "LATENT", "type": "LATENT", "shape": 3, "links": [7]}],
            "properties": {"Node name for S&R": "KSampler"}, "widgets_values": [632559746, "fixed", 40, 8, "heunpp2", "ddim_uniform", 1]}
        ],
        "links": [
          [7, 3, 0, 8, 0, "LATENT"], [8, 4, 2, 8, 1, "VAE"], [11, 4, 0, 14, 0, "MODEL"], [12, 14, 0, 15, 0, "MODEL"],
          [13, 15, 0, 3, 0, "MODEL"], [16, 4, 1, 16, 0, "CLIP"], [20, 16, 0, 21, 0, "CLIP"], [21, 16, 0, 22, 0, "CLIP"],
          [22, 22, 0, 3, 2, "CONDITIONING"], [23, 21, 0, 3, 1, "CONDITIONING"], [24, 8, 0, 23, 0, "IMAGE"], [32, 5, 0, 3, 3, "LATENT"]
        ],
        "groups": [],
        "config": {},
        "extra": {
          "ds": {"scale": 0.9090909090909091, "offset": [-1409.309659797268, -105.16241912057733]},
          "node_versions": {"comfy-core": "0.3.26", "ComfyUI_ADV_CLIP_emb": "63984deefb005da1ba90a1175e21d91040da38ab"}
        },
        "version": 0.4
      },
      "prompt": {
        "3": {"class_type": "KSampler", "_meta": {"title": "KSampler"},
          "inputs": {"seed": "%%_COMFYFIXME_${seed:632559746}_ENDFIXME_%%", "steps": "%%_COMFYFIXME_${steps:40}_ENDFIXME_%%",
            "cfg": "%%_COMFYFIXME_${cfg_scale:8}_ENDFIXME_%%", "sampler_name": "${sampler:heunpp2}", "scheduler": "${scheduler:ddim_uniform}",
            "denoise": "%%_COMFYFIXME_${comfyrawworkflowinputdecimalksamplernodedenoised:1}_ENDFIXME_%%",
            "model": ["15", 0], "positive": ["21", 0], "negative": ["22", 0], "latent_image": ["5", 0]}},
        "4": {"class_type": "CheckpointLoaderSimple", "_meta": {"title": "Load Checkpoint"},
          "inputs": {"ckpt_name": "${model:hyperfusionVpred_v9Vpred.safetensors}"}},
        "5": {"class_type": "EmptyLatentImage", "_meta": {"title": "Empty Latent Image"},
          "inputs": {"width": "%%_COMFYFIXME_${width:1024}_ENDFIXME_%%", "height": "%%_COMFYFIXME_${height:1024}_ENDFIXME_%%", "batch_size": "%%_COMFYFIXME_${batchsize:1}_ENDFIXME_%%"}},
        "8": {"class_type": "VAEDecode", "_meta": {"title": "VAE Decode"}, "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
        "14": {"class_type": "ModelSamplingDiscrete", "_meta": {"title": "ModelSamplingDiscrete"},
          "inputs": {"sampling": "${comfyrawworkflowinputdropdownmodelsamplingdiscretenodesamplingo:v_prediction}", "zsnr": true, "model": ["4", 0]}},
        "15": {"class_type": "RescaleCFG", "_meta": {"title": "RescaleCFG"},
          "inputs": {"multiplier": "%%_COMFYFIXME_${comfyrawworkflowinputdecimalrescalecfgnodemultiplierp:0.8}_ENDFIXME_%%", "model": ["14", 0]}},
        "16": {"class_type": "CLIPSetLastLayer", "_meta": {"title": "CLIP Set Last Layer"},
          "inputs": {"stop_at_clip_layer": "%%_COMFYFIXME_${comfyrawworkflowinputintegerclipsetlastlayernodestopatcliplayerq:-2}_ENDFIXME_%%", "clip": ["4", 1]}},
        "21": {"class_type": "BNK_CLIPTextEncodeAdvanced", "_meta": {"title": "CLIP Text Encode (Advanced)"},
          "inputs": {"text": "${comfyrawworkflowinputtextpositivepromptnodetextv:best quality, high rating, solo, (perfect anatomy:1.6), photorealistic, detailed, gorgeous face, gorgeous brown eyes, blue-black hair, long hair, Japanese, pale makeup, opulent silk kimono, patterned kimono, hyper obese, geisha, saggy fat breasts, breasts on belly, cellulite, belly bursting out of clothes, bellyheavy, (three fold navel:1.2), fat folds, immobile belly, cleavage, (anthro, pig girl:1.4), shocked, hands in the air}",
            "token_normalization": "${comfyrawworkflowinputdropdownpositivepromptnodetokennormalizationv:none}",
            "weight_interpretation": "${comfyrawworkflowinputdropdownpositivepromptnodeweightinterpretationv:A1111}", "clip": ["16", 0]}},
        "22": {"class_type": "BNK_CLIPTextEncodeAdvanced", "_meta": {"title": "CLIP Text Encode (Advanced)"},
          "inputs": {"text": "${comfyrawworkflowinputtextnegativepromptnodetextw:child, kid, very young, loli, baby, worst quality, low rating, (signature), (bad hands:2), (bad arms:1.8), (bad legs:2), (more than 2 legs:2), (more than 2 arms:1.6), (background:3), more than 1 navel, (bad face:1.6), dark, (scenary:2), (background imagery:2)}",
            "token_normalization": "${comfyrawworkflowinputdropdownnegativepromptnodetokennormalizationw:none}",
            "weight_interpretation": "${comfyrawworkflowinputdropdownnegativepromptnodeweightinterpretationw:A1111}", "clip": ["16", 0]}},
        "200": {"class_type": "SwarmSaveImageWS", "_meta": {"title": "Preview Image"}, "inputs": {"images": ["8", 0]}}
      },
      "custom_params": {
        "model": {"name": "Model", "id": "model", "description": "What main checkpoint model should be used.", "type": "model", "subtype": "Stable-Diffusion",
          "default": "hyperfusionVpred_v9Vpred", "min": 0, "max": 0, "view_min": 0, "view_max": 0, "step": 1, "values": null, "value_names": null, "examples": null,
          "visible": false, "advanced": false, "feature_flag": null, "toggleable": false, "priority": 10, "group": null, "always_retain": false,
          "do_not_save": false, "do_not_preview": false, "view_type": "small", "extra_hidden": false, "nonreusable": false, "feature_missing": false},
        "seed": {"name": "Seed", "id": "seed", "description": "Image seed.\n-1 = random.\nDifferent seeds produce different results for the same prompt.",
          "type": "integer", "subtype": null, "default": 632559746, "min": -1, "max": 9223372036854776000, "view_min": 0, "view_max": 0, "step": 1,
          "values": null, "value_names": null, "examples": ["1", "2", "...", "10"], "visible": true, "advanced": false, "feature_flag": null, "toggleable": false, "priority": -30,
          "group": {"name": "Core Parameters", "id": "coreparameters", "toggles": false, "open": true, "priority": -50, "description": "", "advanced": false, "can_shrink": true},
          "always_retain": false, "do_not_save": false, "do_not_preview": false, "view_type": "seed", "extra_hidden": false, "nonreusable": false, "feature_missing": false},
        "steps": {"name": "Steps", "id": "steps", "description": "Diffusion works by running a model repeatedly to slowly build and then refine an image.\nThis parameter is how many times to run the model.\nMore steps = better quality, but more time.\n20 is a good baseline for speed, 40 is good for maximizing quality.\nSome models, such as Turbo models, are intended for low step counts like 4 or 8.\nYou can go much higher, but it quickly becomes pointless above 70 or so.\nNote that steps is a core parameter used for defining diffusion schedules and other advanced internals,\nand merely running the model over top of an existing image is not the same as increasing the steps.\nNote that the number of steps actually ran can be influenced by other parameters such as Init Image Creativity when applied.",
          "type": "integer", "subtype": null, "default": 40, "min": 0, "max": 500, "view_min": 0, "view_max": 100, "step": 1,
          "values": null, "value_names": null, "examples": ["10", "15", "20", "30", "40"], "visible": true, "advanced": false, "feature_flag": null, "toggleable": false, "priority": -20,
          "group": {"name": "Core Parameters", "id": "coreparameters", "toggles": false, "open": true, "priority": -50, "description": "", "advanced": false, "can_shrink": true},
          "always_retain": false, "do_not_save": false, "do_not_preview": false, "view_type": "slider", "extra_hidden": false, "nonreusable": false, "feature_missing": false},
        "cfgscale": {"name": "CFG Scale", "id": "cfgscale", "description": "How strongly to scale prompt input.\nHigher CFG scales tend to produce more contrast, and lower CFG scales produce less contrast.\nToo-high values can cause corrupted/burnt images, too-low can cause nonsensical images.\n7 is a good baseline. Normal usages vary between 4 and 9.\nSome model types, such as Flux, Hunyuan Video, or any Turbo model, expect CFG to be set to 1.",
          "type": "decimal", "subtype": null, "default": 8, "min": 0, "max": 100, "view_min": 0, "view_max": 20, "step": 0.5,
          "values": null, "value_names": null, "examples": ["5", "6", "7", "8", "9"], "visible": true, "advanced": false, "feature_flag": null, "toggleable": false, "priority": -18,
          "group": {"name": "Core Parameters", "id": "coreparameters", "toggles": false, "open": true, "priority": -50, "description": "", "advanced": false, "can_shrink": true},
          "always_retain": false, "do_not_save": false, "do_not_preview": false, "view_type": "slider", "extra_hidden": false, "nonreusable": false, "feature_missing": false},
        "aspectratio": {"name": "Aspect Ratio", "id": "aspectratio", "description": "Image aspect ratio - that is, the shape of the image (wide vs square vs tall).\nSet to 'Custom' to define a manual width/height instead.\nSome models can stretch better than others.\nNotably Flux models support almost any resolution you feel like trying.",
          "type": "dropdown", "subtype": null, "default": "Custom", "min": 0, "max": 0, "view_min": 0, "view_max": 0, "step": 1,
          "values": ["1:1", "4:3", "3:2", "8:5", "16:9", "21:9", "3:4", "2:3", "5:8", "9:16", "9:21", "Custom"],
          "value_names": ["1:1 (Square)", "4:3 (Old PC)", "3:2 (Semi-wide)", "8:5", "16:9 (Standard Widescreen)", "21:9 (Ultra-Widescreen)", "3:4", "2:3 (Semi-tall)", "5:8", "9:16 (Tall)", "9:21 (Ultra-Tall)", "Custom"],
          "examples": null, "visible": true, "advanced": false, "feature_flag": null, "toggleable": false, "priority": -11,
          "group": {"name": "Resolution", "id": "resolution", "toggles": false, "open": false, "priority": -11, "description": "", "advanced": false, "can_shrink": true},
          "always_retain": false, "do_not_save": false, "do_not_preview": false, "view_type": "small", "extra_hidden": false, "nonreusable": false, "feature_missing": false},
        "width": {"name": "Width", "id": "width", "description": "Image width, in pixels.\nSDv1 uses 512, SDv2 uses 768, SDXL prefers 1024.\nSome models allow variation within a range (eg 512 to 768) but almost always want a multiple of 64.\nFlux is very open to differing values.",
          "type": "integer", "subtype": null, "default": 1024, "min": 64, "max": 16384, "view_min": 256, "view_max": 2048, "step": 32,
          "values": null, "value_names": null, "examples": ["512", "768", "1024"], "visible": true, "advanced": false, "feature_flag": null, "toggleable": false, "priority": -10,
          "group": {"name": "Resolution", "id": "resolution", "toggles": false, "open": false, "priority": -11, "description": "", "advanced": false, "can_shrink": true},
          "always_retain": false, "do_not_save": false, "do_not_preview": false, "view_type": "pot_slider", "extra_hidden": false, "nonreusable": false, "feature_missing": false},
        "height": {"name": "Height", "id": "height", "description": "Image height, in pixels.\nSDv1 uses 512, SDv2 uses 768, SDXL prefers 1024.\nSome models allow variation within a range (eg 512 to 768) but almost always want a multiple of 64.\nFlux is very open to differing values.",
          "type": "integer", "subtype": null, "default": 1024, "min": 64, "max": 16384, "view_min": 256, "view_max": 2048, "step": 32,
          "values": null, "value_names": null, "examples": ["512", "768", "1024"], "visible": true, "advanced": false, "feature_flag": null, "toggleable": false, "priority": -9,
          "group": {"name": "Resolution", "id": "resolution", "toggles": false, "open": false, "priority": -11, "description": "", "advanced": false, "can_shrink": true},
          "always_retain": false, "do_not_save": false, "do_not_preview": false, "view_type": "pot_slider", "extra_hidden": false, "nonreusable": false, "feature_missing": false},
        "sampler": {"name": "Sampler", "id": "sampler", "description": "Sampler type (for ComfyUI backends).\nGenerally, 'Euler' is fine, but for SD1 and SDXL 'DPM++ 2M' is popular when paired with the 'Karras' scheduler.\n'Ancestral' and 'SDE' samplers only work with non-rectified models (eg SD1/SDXL) and randomly move over time.\nSome special model variants require specific Samplers or Schedulers.\n'CFG++' samplers have a different CFG range than normal (between 0 to 2, depending).",
          "type": "dropdown", "subtype": null, "default": "heunpp2", "min": 0, "max": 0, "view_min": 0, "view_max": 0, "step": 1,
          "values": ["euler", "euler_ancestral", "heun", "heunpp2", "dpm_2", "dpm_2_ancestral", "lms", "dpm_fast", "dpm_adaptive", "dpmpp_2s_ancestral", "dpmpp_sde", "dpmpp_sde_gpu", "dpmpp_2m", "dpmpp_2m_sde", "dpmpp_2m_sde_gpu", "dpmpp_3m_sde", "dpmpp_3m_sde_gpu", "ddim", "ddpm", "lcm", "uni_pc", "uni_pc_bh2", "res_multistep", "ipndm", "ipndm_v", "deis", "gradient_estimation", "euler_cfg_pp", "euler_ancestral_cfg_pp", "dpmpp_2m_cfg_pp", "dpmpp_2s_ancestral_cfg_pp", "res_multistep_cfg_pp", "res_multistep_ancestral", "res_multistep_ancestral_cfg_pp"],
          "value_names": ["Euler", "Euler Ancestral (Randomizing)", "Heun (2x Slow)", "Heun++ 2 (2x Slow)", "DPM-2 (Diffusion Probabilistic Model) (2x Slow)", "DPM-2 Ancestral (2x Slow)", "LMS (Linear Multi-Step)", "DPM Fast (DPM without the DPM2 slowdown)", "DPM Adaptive (Dynamic Steps)", "DPM++ 2S Ancestral (2nd Order Single-Step) (2x Slow)", "DPM++ SDE (Stochastic / randomizing) (2x Slow)", "DPM++ SDE, GPU Seeded (2x Slow)", "DPM++ 2M (2nd Order Multi-Step)", "DPM++ 2M SDE", "DPM++ 2M SDE, GPU Seeded", "DPM++ 3M SDE (3rd Order Multi-Step)", "DPM++ 3M SDE, GPU Seeded", "DDIM (Denoising Diffusion Implicit Models) (Identical to Euler)", "DDPM (Denoising Diffusion Probabilistic Models)", "LCM (for LCM models)", "UniPC (Unified Predictor-Corrector)", "UniPC BH2", "Res MultiStep (for Cosmos)", "iPNDM (Improved Pseudo-Numerical methods for Diffusion Models)", "iPNDM-V (Variable-Step)", "DEIS (Diffusion Exponential Integrator Sampler)", "Gradient Estimation (Improving from Optimization Perspective)", "Euler CFG++ (Manifold-constrained CFG)", "Euler Ancestral CFG++", "DPM++ 2M CFG++", "DPM++ 2S Ancestral CFG++ (2x Slow)", "Res MultiStep CFG++"],
          "examples": null, "visible": true, "advanced": false, "feature_flag": "comfyui", "toggleable": true, "priority": -5,
          "group": {"name": "Sampling", "id": "sampling", "toggles": false, "open": false, "priority": -8, "description": "", "advanced": false, "can_shrink": true},
          "always_retain": false, "do_not_save": false, "do_not_preview": false, "view_type": "small", "extra_hidden": false, "nonreusable": false, "feature_missing": false},
        "scheduler": {"name": "Scheduler", "id": "scheduler", "description": "Scheduler type (for ComfyUI backends).\nGoes with the Sampler parameter above.",
          "type": "dropdown", "subtype": null, "default": "ddim_uniform", "min": 0, "max": 0, "view_min": 0, "view_max": 0, "step": 1,
          "values": ["normal", "karras", "exponential", "simple", "ddim_uniform", "sgm_uniform", "turbo", "align_your_steps", "beta", "linear_quadratic", "ltxv", "ltxv-image", "kl_optimal"],
          "value_names": ["Normal", "Karras", "Exponential", "Simple", "DDIM Uniform", "SGM Uniform", "Turbo (for turbo models, max 10 steps)", "Align Your Steps (NVIDIA, rec. 10 steps)", "Beta", "Linear Quadratic (Mochi)", "LTX-Video", "LTXV-Image", "KL Optimal (Nvidia AYS)"],
          "examples": null, "visible": true, "advanced": false, "feature_flag": "comfyui", "toggleable": true, "priority": -4,
          "group": {"name": "Sampling", "id": "sampling", "toggles": false, "open": false, "priority": -8, "description": "", "advanced": false, "can_shrink": true},
          "always_retain": false, "do_not_save": false, "do_not_preview": false, "view_type": "small", "extra_hidden": false, "nonreusable": false, "feature_missing": false},
        "batchsize": {"name": "Batch Size", "id": "batchsize", "description": "Batch size - generates more images at once on a single GPU.\nThis increases VRAM usage.\nMay in some cases increase overall speed by a small amount (runs slower to get the images, but slightly faster per-image).",
          "type": "integer", "subtype": null, "default": 1, "min": 1, "max": 100, "view_min": 0, "view_max": 10, "step": 1,
          "values": null, "value_names": null, "examples": null, "visible": true, "advanced": true, "feature_flag": null, "toggleable": false, "priority": -20,
          "group": {"name": "Swarm Internal", "id": "swarminternal", "toggles": false, "open": false, "priority": 0, "description": "", "advanced": true, "can_shrink": true},
          "always_retain": false, "do_not_save": false, "do_not_preview": false, "view_type": "slider", "extra_hidden": false, "nonreusable": false, "feature_missing": false},
        "comfyrawworkflowinputdropdownnegativepromptnodetokennormalizationw": {"name": "token_normalization", "default": "none",
          "id": "comfyrawworkflowinputdropdownnegativepromptnodetokennormalizationw", "type": "dropdown",
          "description": "The token_normalization input for Negative Prompt (Node 22) (dropdown)", "values": ["none", "mean", "length", "length+mean"],
          "view_type": "normal", "min": -9999999999, "max": 9999999999, "step": 1, "visible": true, "toggleable": true, "priority": -10,
          "advanced": false, "feature_flag": null, "do_not_save": false, "no_popover": true,
          "group": {"name": "Negative Prompt (Node 22)", "id": "negativeprompt", "open": false, "priority": -10, "advanced": true, "can_shrink": true, "toggles": false}},
        "comfyrawworkflowinputdropdownnegativepromptnodeweightinterpretationw": {"name": "weight_interpretation", "default": "A1111",
          "id": "comfyrawworkflowinputdropdownnegativepromptnodeweightinterpretationw", "type": "dropdown",
          "description": "The weight_interpretation input for Negative Prompt (Node 22) (dropdown)", "values": ["comfy", "A1111", "compel", "comfy++", "down_weight"],
          "view_type": "normal", "min": -9999999999, "max": 9999999999, "step": 1, "visible": true, "toggleable": true, "priority": -10,
          "advanced": false, "feature_flag": null, "do_not_save": false, "no_popover": true,
          "group": {"name": "Negative Prompt (Node 22)", "id": "negativeprompt", "open": false, "priority": -10, "advanced": true, "can_shrink": true, "toggles": false}},
        "comfyrawworkflowinputtextnegativepromptnodetextw": {"name": "text",
          "default": "child, kid, very young, loli, baby, worst quality, low rating, (signature), (bad hands:2), (bad arms:1.8), (bad legs:2), (more than 2 legs:2), (more than 2 arms:1.6), (background:3), more than 1 navel, (bad face:1.6), dark, (scenary:2), (background imagery:2)",
          "id": "comfyrawworkflowinputtextnegativepromptnodetextw", "type": "text", "description": "The text input for Negative Prompt (Node 22) (text)",
          "values": null, "view_type": "prompt", "min": -9999999999, "max": 9999999999, "step": 1, "visible": true, "toggleable": true, "priority": -10,
          "advanced": false, "feature_flag": null, "do_not_save": false, "revalueGetter": null, "no_popover": true,
          "group": {"name": "Negative Prompt (Node 22)", "id": "negativeprompt", "open": false, "priority": -10, "advanced": true, "can_shrink": true, "toggles": false}},
        "comfyrawworkflowinputdropdownpositivepromptnodetokennormalizationv": {"name": "token_normalization", "default": "none",
          "id": "comfyrawworkflowinputdropdownpositivepromptnodetokennormalizationv", "type": "dropdown",
          "description": "The token_normalization input for Positive Prompt (Node 21) (dropdown)", "values": ["none", "mean", "length", "length+mean"],
          "view_type": "normal", "min": -9999999999, "max": 9999999999, "step": 1, "visible": true, "toggleable": true, "priority": -10,
          "advanced": false, "feature_flag": null, "do_not_save": false, "no_popover": true,
          "group": {"name": "Positive Prompt (Node 21)", "id": "positiveprompt", "open": false, "priority": -10, "advanced": true, "can_shrink": true, "toggles": false}},
        "comfyrawworkflowinputdropdownpositivepromptnodeweightinterpretationv": {"name": "weight_interpretation", "default": "A1111",
          "id": "comfyrawworkflowinputdropdownpositivepromptnodeweightinterpretationv", "type": "dropdown",
          "description": "The weight_interpretation input for Positive Prompt (Node 21) (dropdown)", "values": ["comfy", "A1111", "compel", "comfy++", "down_weight"],
          "view_type": "normal", "min": -9999999999, "max": 9999999999, "step": 1, "visible": true, "toggleable": true, "priority": -10,
          "advanced": false, "feature_flag": null, "do_not_save": false, "no_popover": true,
          "group": {"name": "Positive Prompt (Node 21)", "id": "positiveprompt", "open": false, "priority": -10, "advanced": true, "can_shrink": true, "toggles": false}},
        "comfyrawworkflowinputtextpositivepromptnodetextv": {"name": "text",
          "default": "best quality, high rating, solo, (perfect anatomy:1.6), photorealistic, detailed, gorgeous face, gorgeous brown eyes, blue-black hair, long hair, Japanese, pale makeup, opulent silk kimono, patterned kimono, hyper obese, geisha, saggy fat breasts, breasts on belly, cellulite, belly bursting out of clothes, bellyheavy, (three fold navel:1.2), fat folds, immobile belly, cleavage, (anthro, pig girl:1.4), shocked, hands in the air",
          "id": "comfyrawworkflowinputtextpositivepromptnodetextv", "type": "text", "description": "The text input for Positive Prompt (Node 21) (text)",
          "values": null, "view_type": "prompt", "min": -9999999999, "max": 9999999999, "step": 1, "visible": true, "toggleable": true, "priority": -10,
          "advanced": false, "feature_flag": null, "do_not_save": false, "revalueGetter": null, "no_popover": true,
          "group": {"name": "Positive Prompt (Node 21)", "id": "positiveprompt", "open": false, "priority": -10, "advanced": true, "can_shrink": true, "toggles": false}},
        "comfyrawworkflowinputdecimalksamplernodedenoised": {"name": "denoise", "default": 1,
          "id": "comfyrawworkflowinputdecimalksamplernodedenoised", "type": "decimal", "description": "The denoise input for KSampler (Node 3) (decimal)",
          "values": null, "view_type": "slider", "min": 0, "max": 1, "step": 0.05, "visible": true, "toggleable": true, "priority": -5,
          "advanced": false, "feature_flag": null, "do_not_save": false, "revalueGetter": null, "no_popover": true,
          "group": {"name": "KSampler (Node 3)", "id": "ksampler", "open": false, "priority": -5, "advanced": true, "can_shrink": true, "toggles": false}},
        "comfyrawworkflowinputintegerclipsetlastlayernodestopatcliplayerq": {"name": "stop_at_clip_layer", "default": -2,
          "id": "comfyrawworkflowinputintegerclipsetlastlayernodestopatcliplayerq", "type": "integer",
          "description": "The stop_at_clip_layer input for CLIPSetLastLayer (Node 16) (integer)",
          "values": null, "view_type": "big", "min": -24, "max": -1, "step": 1, "visible": true, "toggleable": true, "priority": 0,
          "advanced": false, "feature_flag": null, "do_not_save": false, "revalueGetter": null, "no_popover": true,
          "group": {"name": "CLIPSetLastLayer (Node 16)", "id": "clipsetlastlayer", "open": false, "priority": 0, "advanced": true, "can_shrink": true, "toggles": false}},
        "comfyrawworkflowinputdropdownmodelsamplingdiscretenodesamplingo": {"name": "sampling", "default": "v_prediction",
          "id": "comfyrawworkflowinputdropdownmodelsamplingdiscretenodesamplingo", "type": "dropdown",
          "description": "The sampling input for ModelSamplingDiscrete (Node 14) (dropdown)", "values": ["eps", "v_prediction", "lcm", "x0"],
          "view_type": "normal", "min": -9999999999, "max": 9999999999, "step": 1, "visible": true, "toggleable": true, "priority": 0,
          "advanced": false, "feature_flag": null, "do_not_save": false, "no_popover": true,
          "group": {"name": "ModelSamplingDiscrete (Node 14)", "id": "modelsamplingdiscrete", "open": false, "priority": 0, "advanced": true, "can_shrink": true, "toggles": false}},
        "comfyrawworkflowinputdecimalrescalecfgnodemultiplierp": {"name": "multiplier", "default": 0.8,
          "id": "comfyrawworkflowinputdecimalrescalecfgnodemultiplierp", "type": "decimal",
          "description": "The multiplier input for RescaleCFG (Node 15) (decimal)",
          "values": null, "view_type": "slider", "min": 0, "max": 1, "step": 0.01, "visible": true, "toggleable": true, "priority": 0,
          "advanced": false, "feature_flag": null, "do_not_save": false, "revalueGetter": null, "no_popover": true,
          "group": {"name": "RescaleCFG (Node 15)", "id": "rescalecfg", "open": false, "priority": 0, "advanced": true, "can_shrink": true, "toggles": false}}
      },
      "param_values": {"seed": 632559746, "steps": 40, "sampler": "heunpp2", "scheduler": "ddim_uniform", "cfgscale": 8,
        "model": "hyperfusionVpred_v9Vpred", "width": 1024, "height": 1024, "batchsize": 1, "aspectratio": "Custom"},
      "image": "/imgs/model_placeholder.jpg",
      "description": "",
      "enable_in_simple": false
    }

    throwawayjm (Author) · Mar 21, 2025

    I forgot that I had an old workflow from someone else, so I uploaded it here: https://civitai.com/images/64978187

    z5418ly · Dec 21, 2024

    Can you make a LoRA of this model?

    throwawayjm (Author) · Dec 22, 2024

    I could, but it wouldn't work with any other models. I tried it early on with some other common SD1 vpred models, but my training args must have been different enough to make them incompatible. Kohya's sd-scripts has a LoRA extraction script if you really want to try.

    prcprc676 · Jan 7, 2025

    I tried installing hyperfusionVpred_v9Vpred.safetensors on stable-diffusion-webui-forge.

    I used:
    Sampling method: Euler a, DPM++ 2M
    Clip skip: 2
    VAE / Text Encoder: vae-ft-mse-840000-ema-pruned.ckpt
    Schedule type: normal, simple, Uniform, DDIM
    only 768x1024
    CFGRescale Integrated: 0.7

    I only get blurry images of some dots. I use the Forge UI as it's the most optimised for my GTX 1070.
    Can the HyperFusion v9 final model run on Forge UI?

    throwawayjm (Author) · Jan 8, 2025

    Did you include the .yaml file in the model dir?

    jaimd · Mar 21, 2025

    @throwawayjm where can I download that yaml file?

    throwawayjm (Author) · Mar 21, 2025

    @jaimd it's in the downloads, under Config

    602539 · Jan 14, 2025

    Can't get coherent images with the model at all, no matter what I try. Always witches or hags with mangled proportions, as if the negatives were positives. It's like the previous one; the LoRA version is much better. Keep it up with the vpred LoRA though.

    throwawayjm (Author) · Jan 15, 2025

    What GUI are you using? I've only really tested it in A1111, but it should work in Comfy and Forge as well.

    602539 · Jan 19, 2025

    @throwawayjm SD ReForge. I always have issues using hyperfusion models, BTW. The LoRAs are much better and easier to use.

    jaimd · Mar 21, 2025

    Can anyone explain how to use this? I need vpred to make it work, right? I'm in ComfyUI; how can I make it work? I don't know how to install vpred and add the node.

    jaimd · Mar 21, 2025

    Would you share the workflow for ComfyUI? I just can't make it work.

    throwawayjm (Author) · Mar 21, 2025

    @jaimd I dug up an ancient ComfyUI example, but I don't really use ComfyUI, so hopefully it still works. Posted it in the gallery below: https://civitai.com/images/64978187

    MechaMaxZilla · Jun 11, 2025

    Is it at all possible that a PONY or XL version can be made?

    Details

    Type: Checkpoint
    Base model: SD 1.5
    Downloads: 1,092
    Platform: CivitAI
    Platform Status: Available
    Created: 12/15/2024
    Updated: 5/17/2026
    Deleted: -
