CivArchive

    Elsa_Frozen1_qwen2512_V1:

For the first time, I'm using only assets from Frozen 1. I was a bit hesitant, but decided to go ahead and post it anyway.

    Qwen-Image:

Qwen-Image is definitely another leap forward, like SDXL all over again. Seriously, if you've got the cash or the hardware, you've got to try fine-tuning this thing!

    If there's a model out there that's gonna spark the next "Pony" craze, Qwen’s got a real shot!

    Just check out the detail and how accurate the outfits are!

Qwen-Image actually learned something! It's practically movie-quality.

The last model that impressed me this much with its learning ability was HunyuanVideo, but its image quality wasn't as good as Wan's.

Qwen-Image nails both aspects, though.

(Still, I think HunyuanVideo reigns supreme for consistent character likeness; call it a 99 versus Qwen's 95.)

    To be blunt, the real value here is something only a skilled trainer would understand.

    Wan2.2_9-outfit (highnoise+lownoise):

I used the same dataset, but this time I beefed up the training captions. The same problem as Wan 2.1 remains: clothing variations still don't stick well. Any improvement I'm seeing comes more from the cleaner dataset than from any model upgrade. During testing I also noticed that Wan 2.2 images come out slightly softer; that's a side effect of the KSampler (Advanced) start/end-at-step trick used to split sampling between the high-noise and low-noise checkpoints.
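For readers unfamiliar with the split, here is a minimal sketch of the idea behind the two-stage workflow. This is not the real ComfyUI API; the model names, step count, and switch point are illustrative assumptions (common Wan 2.2 workflows hand off roughly halfway through the schedule).

```python
# Hypothetical sketch of the "start/end at step" split used with Wan 2.2's
# high-noise + low-noise checkpoints. All names and numbers are illustrative.

TOTAL_STEPS = 20
SWITCH_STEP = 10  # assumption: high-noise handles steps [0, 10), low-noise [10, 20)

def run_stage(model_name, latent, start_step, end_step):
    """Stand-in denoiser: records which model handled which step."""
    for step in range(start_step, end_step):
        latent["history"].append((model_name, step))
    return latent

latent = {"history": []}
# Stage 1: the high-noise model denoises the early (noisy) steps...
latent = run_stage("wan2.2_high_noise", latent, 0, SWITCH_STEP)
# Stage 2: ...then the low-noise model refines the remaining steps.
latent = run_stage("wan2.2_low_noise", latent, SWITCH_STEP, TOTAL_STEPS)

# Every step is covered exactly once, with no overlap between stages.
handled = [step for _, step in latent["history"]]
assert handled == list(range(TOTAL_STEPS))
```

The point of the second KSampler (Advanced) starting exactly where the first ends is that the latent never gets re-noised in between; the softness I mention above seems to come from how the hand-off interacts with the sampler.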

The low-noise checkpoint of Wan2.2_T2V_14B and the vanilla Wan2.1_T2V_14B checkpoint share a lot of weights, so the LoRAs are pretty much cross-compatible. (It turns out the Wan 2.2 high-noise checkpoint didn't need the step_distill LoRA at all; what really made a difference was the step_distill LoRA on the low-noise checkpoint.)
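If you want to check that kind of weight sharing yourself, a rough approach is to compare state-dict keys and shapes between the two checkpoints. The dicts below are toy stand-ins for the real Wan 2.1 / Wan 2.2 low-noise weights (the keys and shapes are made up); with the actual files you would read keys via the safetensors library instead.

```python
# Rough gauge of checkpoint similarity: fraction of keys in one state dict
# that appear with the same shape in the other. Toy data, not real weights.

def key_overlap(state_a, state_b):
    """Fraction of entries in state_a whose key and shape also exist in state_b."""
    shared = [k for k, shape in state_a.items() if state_b.get(k) == shape]
    return len(shared) / len(state_a)

# Toy stand-ins: key -> tensor shape (both names and shapes are illustrative).
wan21 = {
    "blocks.0.self_attn.q.weight": (5120, 5120),
    "blocks.0.ffn.0.weight": (13824, 5120),
}
wan22_low = {
    "blocks.0.self_attn.q.weight": (5120, 5120),
    "blocks.0.ffn.0.weight": (13824, 5120),
}

overlap = key_overlap(wan21, wan22_low)
# A high overlap suggests a LoRA trained against one checkpoint will load
# onto the other, which matches what I saw in practice.
```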

    Wan2.1_9-outfit:

I had no plans to release this model. It was trained more than a month ago, but since it didn't turn out as I hoped, I never thought anyone would even care.
I also forgot to make the tag TXT file for this version.

    HiDream:

Amazing! HiDream feels like the next version of Flux: it's easy to train and captures details brilliantly. Some instability in appearance still exists, but that doesn't overshadow its performance.

    Unfortunately, running HiDream is extremely demanding on hardware. It has three versions, and even the Fast version is still quite slow for me.

Plus, the pre-training preparations were a real pain. This LoRA is just for testing, so it isn't optimized for best performance, and the training dataset was incomplete (it was for comparative experiments).

    I think this could be one of the next generation models we can expect!

    Detailed introduction here: https://comfyui-wiki.com/en/tutorial/advanced/image/hidream/i1-t2i

    Wan2.1-14B (T2V)

I stopped the training too early without saving intermediate checkpoints; it would have performed better if continued. But this version should still be good enough to evaluate Wan2.1-14B's quality. Hope I'm not too late sharing this. The reason I avoided training 14B before was its massive weight files and painfully slow testing, which is why I only uploaded images initially. Did you know that technically these models treat images as 1-frame videos? Even with dual 4090s in the cloud, it runs at 3 seconds per step (vs. HunyuanVideo's 1 sec/step).
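The "image as a 1-frame video" idea is simple to picture: video models take tensors shaped (channels, frames, height, width), so a still image is just a video whose frames axis has length 1. A minimal sketch with nested lists standing in for tensors (shapes are illustrative):

```python
# Wrap a still image as a single-frame video by inserting a frames axis.

def image_to_video(image):
    """Wrap a [C][H][W] nested-list image as a [C][T=1][H][W] video."""
    return [[plane] for plane in image]

def video_shape(video):
    """Report (C, T, H, W) for a nested-list video."""
    return (len(video), len(video[0]), len(video[0][0]), len(video[0][0][0]))

# A tiny 3-channel, 2x2 "image" becomes a 1-frame video.
image = [[[0, 0], [0, 0]] for _ in range(3)]   # (C=3, H=2, W=2)
video = image_to_video(image)
print(video_shape(video))  # -> (3, 1, 2, 2)
```

In a real framework this is just adding a singleton dimension (e.g. an unsqueeze on the frames axis), which is why an image dataset can be fed to a video model without any other changes.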

    During testing, I noticed two key traits about 14B:

    1. It's much more resistant to overtraining than other models.

2. Its output is cleaner and less noisy than HunyuanVideo's.

    Wan2.1-1.3B

All these examples were generated using Wan2.1-1.3B, and the training was done with the official 1.3B weights. I know, you're probably wondering why there are so many Elsa LoRAs. She's kind of my go-to character for testing new models. There are some other reasons too, both personal and technical, but I doubt you'd be interested in those.

Anyway, the point is that HunyuanVideo is generally better than Wan at picking up a character's face and clothes from the training images. It usually does a pretty good job with T2V (text-to-video).

    Wan is used more for I2V (image-to-video).

    Flux-Elsa in winter dress

I realized that Flux's LoRA doesn't work well with multiple sets of Elsa's outfits, so I tried training a set separately. However, the result wasn't as good as I expected. Flux is confusing me; something is holding back the character's resemblance.

Flux-test

This might be a Civitai platform issue: the updated version I uploaded returned a 404 error (likely lost during the update).

You're welcome to test this Flux dev model; I may delete it after some time.

It was such a crude attempt that I released the final model without having time to test it, in order to use Civitai's online generation capabilities.


    Comments (1)

mobdik17378 · Jul 9, 2025 · 1 reaction

    Is this just a character lora for Wan? Your description isn't describing shit

    LORA
    Wan Video 14B t2v

    Details

Downloads: 187
Platform: CivitAI
Platform Status: Available
Created: 7/8/2025
Updated: 5/13/2026
Deleted: -
Trigger Words: elsa

    Files

    Elsa_wan21_T2V_9outfit_V1-27.safetensors