    Detailz-WAN: Detail Enhancer for WAN Videos - v1.0

    Detailz-WAN

    Enhance the details in your WAN video generations

    Detailz-WAN was designed to help with your video generations using the WAN models. It focuses primarily on improving the details of people (skin, faces, etc.), but it can also add detail to objects and backgrounds, and it seems to add depth to colors and lighting as well.

    Detailz was trained on 704 high-definition photographs, mostly of people in natural surroundings - a selected subset of my training images, chosen to push WAN toward better details. It was designed with the TEXT to VIDEO model in mind, but testing on IMAGE to VIDEO shows good results too: it tends to add details and 'depth' without deviating from the input image.

    IT JUST WORKS!! Try it. Please post some examples and give a thumbs up if you use it and like it.

    Note: This is not an NSFW-trained model; the images used were all SFW. HOWEVER, it works well in conjunction with other available NSFW-enhancing LoRAs and helps keep the subjects more realistic. So for improved NSFW results, combine it with an NSFW LoRA from one of the other amazing creators on here.

    All of the sample images were made in ComfyUI using the default workflow, with a simple upscale step (Remacri) added at the end. They are all TEXT to VIDEO generations, and most were randomly prompted. I will add some better-prompted videos over time, but I was rushing to get this ready to publish so I could move on to enjoying it more. Also, it looks like the format I saved to doesn't do video previews, so you have to open the showcase images for them to move... frustrating, and I will try to replace them soon.

    Use:

    • Designed for WAN T2V Model

    • Trained on 14B T2V model

    • Strength of 0.75-1.0

    • No trigger word required, but it was trained with the word "detailz"

    • Trained in AI-Toolkit on still images only
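For anyone curious what the strength setting actually controls: here is a minimal, hypothetical NumPy sketch (not WAN's or ComfyUI's actual internals) of how a LoRA strength multiplier scales the low-rank update merged into a base weight, which is why 0.75-1.0 gives a stronger effect than lower values.

```python
import numpy as np

def apply_lora(W, A, B, strength):
    """Merge a low-rank LoRA update into a base weight matrix.

    W: (out, in) base weight; B: (out, r) and A: (r, in) are the
    trained low-rank factors. `strength` scales how far the merged
    weight moves away from the base model.
    """
    return W + strength * (B @ A)

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))   # base weight
B = rng.normal(size=(8, 2))   # LoRA "down" factor (rank 2)
A = rng.normal(size=(2, 8))   # LoRA "up" factor

# Strength 0 leaves the base model untouched; 1.0 applies the full
# trained update. Values in between (e.g. the suggested 0.75-1.0)
# interpolate how much the LoRA influences the output.
delta_partial = np.abs(apply_lora(W, A, B, 0.75) - W).sum()
delta_full = np.abs(apply_lora(W, A, B, 1.0) - W).sum()
```

This is also why dialing strength down (as suggested later in the comments for I2V likeness) reduces how far results drift from the base model.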

    Description

    Over 14,000 training steps on 704 high-definition images to add detail to your WAN creations.


    Comments (60)

    altolstyh386 · Mar 22, 2025

    Is there any way to remove that trigger word? It is completely unnecessary here.

    Vision_AI_ry (Author) · Mar 22, 2025 · 2 reactions

    You don't have to use it at all. I did not use it when testing.

    Vision_AI_ry (Author) · Mar 22, 2025 · 2 reactions

    I went in and removed it from the CIVITAI trigger word setting. Just in case that was causing you the issue.

    Vision_AI_ry (Author) · Mar 22, 2025 · 10 reactions

    Not sure why the showcase movies appear as static images unless you click on them. Civitai is having issues, though: posts are not showing up, and the suggested resources keep appearing and disappearing on here. Hoping the site catches up and all this corrects itself. I do not want to have to upload again.

    crombobular · Mar 22, 2025

    yeah the site is taking a shit at the moment again. you're prob fine, it should fix itself at some point. comments are also 50/50 atm on if they show up or not

    Vision_AI_ry (Author) · Mar 22, 2025 · 1 reaction

    @crombobular Yeah, it's just frustrating when you go to launch a LoRA you've been working on all week and nothing works on the site. By the time they fix it, you are buried in the list of new releases and your LoRA never gets a fair chance.

    vslinx · Mar 23, 2025

    @Vision_AI_ry have the same issue man, every time i drop a LoRA the website starts crashing and they throw it into maintenance mode shortly after 😂

    Sometimes you can create a bounty and give away some buzz to people creating videos with your LoRA, really helps boosting some visibility + you get some buzz back from people downloading and liking your content!

    Vision_AI_ry (Author) · Mar 23, 2025 · 1 reaction

    @vslinx Yeah, I have participated in some of those. I just put them on here and people who like them can go at it. Not really doing it for buzz anyway. I just wish the video previews worked, though, because generating that many vids took hours, and it's frustrating when all you see is a screen grab that looks like a mediocre FLUX gen lol.

    zym0x · Mar 23, 2025 · 1 reaction

    CivitAi is just broken right now, it shouldn't normally be like this.

    HearmemanAI · Mar 22, 2025

    Nice work!
    I haven't used it yet, but I really like the example videos you posted.

    helloworld45 · Mar 22, 2025

    Trained with 1.3B or 14B model ?

    Vision_AI_ry (Author) · Mar 22, 2025 · 1 reaction

    14B model. Good point I should put that in the notes.

    helloworld45 · Mar 23, 2025

    @Vision_AI_ry works great, with i2v too : )

    Vision_AI_ry (Author) · Mar 23, 2025

    @helloworld45 Glad you have good results in I2V too. I assumed it would, but you never know. And videos take a lot of time for testing purposes.

    helloworld45 · Mar 23, 2025

    @Vision_AI_ry I can't say if it enhances the result yet, but it does do something, as the result is not the same with or without the LoRA for the same seed

    azeli · Mar 24, 2025 · 1 reaction

    @Vision_AI_ry nobody should be training 1.3!

    Vision_AI_ry (Author) · Mar 24, 2025

    @azeli I never judge. There may be some people training on it because their hardware can handle it and not handle 14B. Anyway, I clarified this was trained on 14B.

    TurboCoomer · May 19, 2025 · 1 reaction

    @azeli everybody should be training 1.3

    xG00N3Rx · Mar 23, 2025

    Does this have any sort of positive effect on I2V? Thank you!

    Vision_AI_ry (Author) · Mar 23, 2025

    I have not tested it with I2V. It will definitely have an effect but it may alter the appearance?? I will try to test it more later today.

    Vision_AI_ry (Author) · Mar 23, 2025

    Another user in the comments said that it did have good results on I2V. I still have not had a chance to test it.

    3427221 · Mar 24, 2025

    I think it allows you to use a bit fewer steps while maintaining decent face details, so I would say it's good for I2V too

    3427221 · Mar 24, 2025 · 9 reactions

    I think this LoRA is a MUST HAVE, 100% recommended for anyone looking to reduce the steps needed to make decent-looking output with I2V models. It's a night-and-day difference: I usually get noisy, ghosting output in realistic-style video unless I use 30 or more steps (often 50+), but with this LoRA activated I can go as low as 18-20 steps and often get BETTER results than with 30+. I don't know what you did in the training, but it worked amazingly. Keep it up, and if you can identify the reason for this, don't hesitate to do a V2 :p

    Vision_AI_ry (Author) · Mar 24, 2025 · 4 reactions

    I am glad you are liking it. I think it comes down to image selection for training. I have a training set I collected over the past year from open source that is high definition and high quality. I used a large sample because I did not want the LoRA to over influence the main subjects in an image. I might try cutting the samples by half (keeping the top 50% of course) and see if that makes noticeable improvement without over influencing the original prompt (or image). For now, I am pleased with how this one balanced out though.

    CharlieBrown0115 · Jun 29, 2025

    Can you show a before and after when using this LoRA? I'm generating image2videos with some images, and I'm getting cartoon or anime skin when using it on real people

    Ass51 · Mar 25, 2025 · 2 reactions

    Great! If there's any chance, would you train a 1.3B T2V high-definition LoRA using hundreds of high-resolution photos or videos, so we can use the fast, small 1.3B T2V model to obtain image quality that surpasses the 14B T2V model? Thanks! :)

    ginx · Mar 29, 2025

    this please! especially with the new 1.3 i2v model

    ToxicBot · Apr 4, 2025

    @ginx I can not find anything about a Wan 1.3B I2V model, the official hugging face doesn't have any mention of it and I am not finding anything with some quick searches. Where do you see this?

    adadsqe · Apr 16, 2025

    @ToxicBot There is the 1.3B FUN InP model. I couldn't get it to run with acceptable results for I2V, but that might just have been me messing up the setup.

    Silver_bullet666 · Apr 15, 2025 · 2 reactions

    This lora fixed some compression artifacts in my WAN generations, much thanks sir

    effyaiai · Apr 16, 2025

    does anyone have a video on how to make videos?

    just downloaded stable d

    BSMITH044076 · Apr 27, 2025

    ComfyUI auto installer WAN2.1 | GGUF | UPSCALE - I have downloaded everything, and thanks to UmeAiRT it worked first time for me. One file is missing; you're gonna have to download and place it manually.

    jargoman · Apr 28, 2025

    wan 2.1 - best quality
    hunyuan - ok quality but hard to control.
    ltxv - The worst quality wise of the three but so fast and flexible that I like it.
    cogvideo - cogvideo isn't that bad but it's really hit or miss.

    If you're going for the best then wan2.1 is still king. Wan is psychic or something.

    sarashinai · May 10, 2025

    Would you post a video or image with workflow embedded? I'd really like to see the specifics that you used.

    Vision_AI_ry (Author) · May 10, 2025 · 1 reaction

    https://civitai.com/posts/16727312 You should be able to get it off this one. It is nothing special. Everything up to the first save node was someone else's work. I adjusted a few of the settings to my liking, then I added some upscale nodes and post-image-processing nodes. I sometimes use Tea Cache (bypassed in the example). I normally use Foolhardy-Remacri as my upscaler, but I had just started experimenting with ClearRealityV1. Hope this helps. It is not very clean, but you can see the functionality.

    flayspotifybot2119 · May 19, 2025

    Would this work on 1.3B model?

    TurboCoomer · May 19, 2025

    no

    wic996 · May 22, 2025 · 3 reactions

    is it working with i2v?

    Vision_AI_ry (Author) · May 23, 2025

    Yes. I use it on I2V almost every time. The only time I don't is for some anime images of non-realistic characters (non-human).

    Karlmeister_AR · Jun 1, 2025

    +1 As @Vision_AI_ry says, it works pretty nicely with I2V. Not recommended with non-realistic images or humanoids with non-standard human features, like pointy ears (it tends to flatten them out), black sclera (tends to make them white), or horns (same as pointy ears).

    panjool · Jun 29, 2025

    @Vision_AI_ry For I2V, what strength do you use so as not to change the reference image's face?

    Vision_AI_ry (Author) · Jun 29, 2025 · 1 reaction

    @panjool If you are trying to keep a distinct likeness, I would keep it below .5. Sometimes it depends on the person's face. This was designed more with T2V in mind, but I use it on my I2V also 90% of the time.

    keralo · Dec 9, 2025

    Great question - and what about the high and low settings? Do I just put it on both nodes?

    Vision_AI_ry (Author) · Dec 9, 2025 · 2 reactions

    @keralo This was designed and trained on Wan 2.1 many months ago. Some people have said it works on Wan 2.2, but I have not tested it (I've been busy with other areas of life recently). I would recommend starting with it just on the Low pass and seeing how it performs - but that is just a guess. Some other user may have tested it and can respond.

    QuantumGenesis · Jul 2, 2025 · 2 reactions

    Tried it on WanGP FusionX with LoRA power 1, but I'm seeing no difference in 520p vids? Is this normal? I was trying to fix inconsistent eye detailing in some video-to-video outputs.

    Vision_AI_ry (Author) · Jul 2, 2025

    I have not tested it extensively with WanGP FusionX. It was trained on normal Wan 14B. I have used it some with FusionX t2v with good results - but have not done a lot of comparisons or tried in any other FusionX versions.

    QuantumGenesis · Jul 2, 2025

    Ok I’ll keep testing thanks

    Clocksmith · Jul 6, 2025 · 1 reaction

    I tested with the LoRA version of FusionX and also saw no noticeable increase in quality

    AbsoluteBussin · Jul 11, 2025

    With the T2V 14B Causvid version it effectively doubles generation time at 10% strength...

    Vision_AI_ry (Author) · Jul 11, 2025 · 1 reaction

    This was trained on standard Wan 2.1 T2V 14B right after it was released. I am sure it is less than optimal on many of the modified WAN models that have come out since. I use it on FusionX without issue, but I have not tested it extensively in other variants. For now, I have no intentions of training it on other versions until either a new mod becomes the 'standard' or my interests move me to a specific version.

    Vision_AI_ry (Author) · Jul 11, 2025 · 1 reaction

    Also, I assume the extra LoRA size is not just overloading your VRAM into swap? That could slow it down too. Just a thought, but it could easily be the LoRA, as I have not dug into how Causvid works.

    azeli · Jul 14, 2025

    seems to stop character loras from working, any way around that?

    Vision_AI_ry (Author) · Jul 14, 2025 · 1 reaction

    I have been using this with character LoRAs on my end without issue. But if you are having problems, you may have to either not use it or try adding it at a minimal strength (0.25 or so). I am sure it has some impact depending on how strongly your character LoRA is trained. This was trained on a large image batch to try to avoid bias toward any one person or look - but that does not mean it could not have picked up some bias anyway.

    NikF · Jul 17, 2025

    Really nooby question here so apologies.

    I'm new to WAN but not ComfyUI. I'm assuming (perhaps incorrectly) that LoRAs are placed in the Loras folder here: .\COMFYUI\models\Loras\?

    It's just that my workflow can't seem to find them there.

    Vision_AI_ry (Author) · Jul 17, 2025 · 1 reaction

    It should find them there - that is the correct place. Did you try doing Edit --> Refresh Node Definitions? If you downloaded them after you started ComfyUI, they do not show up until you refresh or restart ComfyUI.
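The placement described above can be sketched as follows (a hypothetical Python snippet assuming ComfyUI's default folder layout; "Detailz-WAN.safetensors" is a stand-in file name, and the `touch()` call only simulates a finished download):

```python
from pathlib import Path
import shutil

comfy_root = Path("ComfyUI")  # adjust to wherever your install lives
lora_dir = comfy_root / "models" / "loras"
lora_dir.mkdir(parents=True, exist_ok=True)  # create the folder if missing

downloaded = Path("Detailz-WAN.safetensors")  # stand-in for the real file
downloaded.touch()  # simulate the downloaded LoRA file

# Move the file to where ComfyUI's LoRA loader scans, then refresh
# node definitions (or restart ComfyUI) so it shows up in the dropdown.
shutil.move(str(downloaded), str(lora_dir / downloaded.name))
```

After the move, the LoRA should appear in the loader node's list once node definitions are refreshed.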

    4326369 · Aug 19, 2025 · 4 reactions

    Does this work on Wan 2.2? I tried it and it makes no difference

    Vision_AI_ry (Author) · Aug 22, 2025

    No. I may train a version for it, but I have not had time yet. And I'm not sure it will be as impactful on 2.2.

    ThisAIThing · Oct 24, 2025 · 1 reaction

    I found that WAN 2.1 LoRAs need a higher strength on WAN 2.2. This LoRA works on the low-noise steps and provided some additional details: with a Detailz strength of 2, a lightning LoRA strength lowered to around 0.6, and a couple of extra steps, the details start showing.

    IMV521 · Oct 24, 2025 · 1 reaction

    Does this chain up to high pass, low pass, or both? Thanks!

    st3rb3n · Oct 29, 2025

    Hi! Amazing job on this LoRA — the color richness and depth it brings to WAN videos are impressive.

    If possible, could you please share your dataset? I’d like to retrain or adapt it for WAN 2.2