CivArchive

    Elsa_Frozen1_qwen2512_V1:

    For the first time, I'm using only assets from Frozen1. I was a bit hesitant, but decided to go ahead and post it anyway.

    Qwen-Image:

    Qwen-Image is definitely another leap forward; it feels like SDXL all over again. Seriously, if you've got the cash or the hardware, you've got to try fine-tuning this thing!

    If there's a model out there that's gonna spark the next "Pony" craze, Qwen’s got a real shot!

    Just check out the detail and how accurate the outfits are!

    Qwen-Image actually learned something! It's practically movie-quality.

    The last model that impressed me this much with its learning ability was HunyuanVideo, but its image quality wasn't as good as Wan's.

    Qwen-Image nails both aspects, though.

    (I still think Hunyuan reigns supreme for consistent character likeness, though; think of it like a 99 vs. a 95 compared to Qwen.)

    To be blunt, the real value here is something only a skilled trainer would understand.

    Wan2.2_9-outfit (highnoise+lownoise):

    I used the same dataset, but this time I beefed up the training captions. Same problem as Wan 2.1: clothing variations still don't stick well, so any improvement I'm seeing comes from the cleaner dataset rather than the model upgrade. During testing I also noticed that Wan 2.2 images come out slightly softer; that's a side effect of the KSampler (Advanced) start/end-at-step trick.
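    The start/end-at-step trick boils down to splitting one sampling schedule between the two Wan 2.2 models: the high-noise checkpoint handles the early, noisier steps and the low-noise checkpoint takes over for the rest. A minimal sketch of that hand-off logic; the step counts are illustrative assumptions, not Wan's recommended values:

    ```python
    def split_steps(total_steps: int, boundary: int):
        """Divide a sampling schedule between two models.

        The high-noise model runs the early (noisier) steps and the
        low-noise model the remaining ones, mirroring the start_at_step /
        end_at_step settings on two chained KSampler (Advanced) nodes.
        """
        if not 0 < boundary < total_steps:
            raise ValueError("boundary must fall inside the schedule")
        high_noise_range = (0, boundary)          # (start_at_step, end_at_step)
        low_noise_range = (boundary, total_steps)
        return high_noise_range, low_noise_range

    # e.g. 20 steps total, handing off at step 10 (made-up values)
    hi, lo = split_steps(20, 10)
    print(hi, lo)  # (0, 10) (10, 20)
    ```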

    The low-noise checkpoint of Wan2.2_T2V_14B and the vanilla Wan2.1_T2V_14B checkpoint share a lot of weights, so the LoRAs are pretty much cross-compatible. (Turns out the Wan 2.2 high-noise checkpoint didn't actually need the step_distill LoRA at all; what really made a difference was the low-noise checkpoint's step_distill LoRA.)
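    That kind of weight overlap can be checked mechanically by comparing the tensor names in the two checkpoints. A sketch using stand-in key lists rather than the real multi-gigabyte files (with actual checkpoints you would read the names via the `safetensors` library instead):

    ```python
    def key_overlap(keys_a, keys_b):
        """Fraction of tensor names shared by two checkpoints.

        A high overlap suggests a LoRA trained against one model will at
        least load against the other (matching names, not necessarily
        identical behavior).
        """
        a, b = set(keys_a), set(keys_b)
        return len(a & b) / max(len(a | b), 1)

    # Stand-in key sets for illustration only; real names come from
    # safetensors.safe_open("model.safetensors", framework="pt").keys()
    wan21 = ["blocks.0.attn.q.weight", "blocks.0.attn.k.weight", "head.weight"]
    wan22_low = ["blocks.0.attn.q.weight", "blocks.0.attn.k.weight", "head.weight"]
    print(key_overlap(wan21, wan22_low))  # 1.0 for identical key sets
    ```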

    Wan2.1_9-outfit:

    I had no plans to release this model. It was trained over a month ago, but since it didn't turn out as I'd hoped, I never thought anyone would care.
    I also forgot to make the tag TXT files for this version.

    HiDream:

    Amazing! HiDream feels like the next version of Flux: it's easy to train and captures details brilliantly! Some instability in appearance remains, but that doesn't overshadow its performance.

    Unfortunately, running HiDream is extremely demanding on hardware. It has three versions, and even the Fast version is still quite slow for me.

    Plus, the pre-training preparations were a real pain. This LoRA is just for testing, so it's not optimized for the best performance, and the training dataset was incomplete (for comparative experiments).

    I think this could be one of the next generation models we can expect!

    Detailed introduction here: https://comfyui-wiki.com/en/tutorial/advanced/image/hidream/i1-t2i

    Wan2.1-14B (T2V)

    I stopped training too early without saving checkpoints; it would've performed better if continued. But this version should still be good enough to evaluate Wan2.1-14B's quality. Hope I'm not too late sharing this. The reason I avoided training 14B before was its massive weight files and painfully slow testing, which is why I only uploaded images initially. Did you know they technically treat images as 1-frame videos? Even on dual 4090s in the cloud, it runs at 3 seconds per step (vs. HunyuanVideo's 1 sec/step).
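    The "images as 1-frame videos" point is literal at the tensor level: a video model consumes a (batch, channels, frames, height, width) tensor, so a still image is just the frames = 1 case. A tiny illustration; the latent dimensions below are made-up, not Wan's actual sizes:

    ```python
    import numpy as np

    # Hypothetical latent dimensions, for illustration only.
    batch, channels, height, width = 1, 16, 60, 104

    image_latent = np.zeros((batch, channels, height, width))
    # Insert a time axis of length 1: the image becomes a 1-frame "video".
    video_latent = image_latent[:, :, np.newaxis, :, :]
    print(video_latent.shape)  # (1, 16, 1, 60, 104)
    ```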

    During testing, I noticed two key traits about 14B:

    1. It's much more resistant to overtraining than other models.

    2. Its output is cleaner/less noisy than HunyuanVideo's.

    Wan2.1-1.3B

    All these examples were generated using Wan2.1-1.3B, and the training was done with the official 1.3B weighted model. I know, you're probably wondering why there are so many Elsa LoRAs. She's kind of my go-to character for testing new models. There are some other reasons too, both personal and technical, but I doubt you'd be interested in those.

    Anyway, the point is that Hunyuan is generally better than Wan at picking up a character's face and clothes from the training images. It usually does a pretty good job with T2V (text-to-video).

    Wan is used more for I2V (image-to-video).

    Flux-Elsa in winter dress

    I realized that Flux's LoRA doesn't work well with multiple sets of Elsa's outfits, so I tried training one set separately. However, the result wasn't as good as I expected. Flux is confusing me; something is holding back the character's resemblance.

    Flux-test

    This might be a Civitai platform issue: the updated version I uploaded returned a 404 error (likely lost during the update).

    Feel free to test this Flux dev model; I may delete it after a while.

    It was such a crude attempt that I released the final model without having time to test it, in order to use Civitai's online generation capabilities.


    LORA
    Wan Video

    Details

    Downloads
    217
    Platform
    CivitAI
    Platform Status
    Available
    Created
    5/3/2025
    Updated
    5/13/2026
    Deleted
    -

    Files

    elsa_wan2.1_14B-16.safetensors

    Mirrors

    HuggingFace (1 mirror)
    TensorFiles (1 mirror)