Animates videos in a stylized, physically based rendering (PBR) aesthetic, comparable to the look of modern animated films.
The primary trigger word for this style is "PBRSty1e." Supporting tags aren't required if your initial image is already in the style, but some tags that may aid quality in edge cases are "3d animation, shiny skin, realistic style."
California AB 2013 Training Data Disclosure
This LoRA was fine-tuned using visual data consisting primarily of still frames extracted from animated films, along with a smaller amount of publicly available fan-created renderings and AI-generated images. The training data also includes frames extracted from video clips, primarily sourced from animated films, with a limited number derived from fan-created renderings. The training data includes copyrighted material owned by third parties, including film studios and individual artists. No training data was licensed or purchased. This LoRA is provided for non-commercial use only under the terms of its distribution.
The dataset consists of approximately 519 images and 51 video clips collected from publicly accessible sources in 2025. Data was processed through standard resizing, cropping, normalization, frame extraction, and labeling steps. Synthetic images were included as part of the training dataset.
This model is intended for non-commercial, experimental, and educational use. Generated outputs may reflect copyrighted visual styles or themes associated with the underlying training data. Users are responsible for ensuring compliance with applicable copyright law, other intellectual property laws, and all other applicable laws.
Comments (12)
High model uploaded twice? Also, thanks; I was looking for something like this.
Thank you for catching that! Yes, I mistakenly uploaded the high model twice on that listing. I've now removed the extra.
I really loved this LoRA for WAN and used it A LOT. Any chance we could get one for LTX-2?
I've already begun the process :) Right now I'm determining the optimal training methods for LTX-2 on my hardware, but I've already re-normalized the video dataset to 24 fps and intend to make an LTX-2 version once I get my training settings figured out.
Please share the workflow you made this with. I've been looking for a Wan 2.2 workflow with LoRA support, but nothing has worked for me or given me this quality of result, and I really want to try this to test my LoRA. Thanks in advance!
I'd be happy to help! What component are you hoping to gain some insight on? Dataset, trainer config, etc?
@ReltivlyObjectv Thanks for the reply! I really love how that Pixar-looking character turned out—it’s exactly the vibe I’m trying to hit. I’m a bit stuck on how to properly get my own character LoRA working with Wan 2.2 14B like yours. Where are you actually putting the LoRA node in your I2V workflow to keep everything so consistent? Also, for my training, I only have static images like character sheets with different angles and lighting rather than video clips. Do you think just images are enough to keep the character's look stable when they move, or would adding video make a big difference? I'd love to know any tips on your trainer settings or just how you got that clean Pixar finish without the face getting all messy or blurry. Honestly, if you're open to sharing the workflow file you used, that would be even better and super helpful! Thanks!
@blsinhvn Happy to! :) For reference, this pastebin is the training config for this model: https://pastebin.com/14xtnJT5
The most notable aspects of the config I changed are that I used linear rank 16, conv rank 8, and trained on both images and clips.
Something that isn't visible in the config but is very important is that all video clips were normalized to 18 FPS and trimmed down to the context length used for training. If you skip these steps, you can respectively see the clips move too rapidly/slowly or adhere poorly to prompts; both can be done with either FFMPEG or the VHS nodes in ComfyUI. The images are the most important part: they're a low-intensity, easily compiled way to teach the model what its output should look like, and an easy way to add a high amount of diversity. The clips are important to include for anything that really presents itself during motion in the desired style (in this instance, the way mouths move when characters talk, and things like over-dramatized arm movement). If you don't include clips, the model is liable to default to realistic movement and potentially bleed realism when it can tell something should be different but doesn't quite know how, which results in blur or style mixing.
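As a rough sketch of that normalization step (the file names and the exact context length below are placeholder assumptions; the real values come from your own trainer config), the FFMPEG arguments can be derived like this:

```python
# Sketch: build the ffmpeg arguments for resampling a clip to the
# training frame rate and trimming it to the trainer's context length.
# File names and CONTEXT_FRAMES are hypothetical placeholders.

TARGET_FPS = 18       # frame rate the clips were normalized to
CONTEXT_FRAMES = 81   # example context length in frames (assumption)

def ffmpeg_normalize_cmd(src: str, dst: str,
                         fps: int = TARGET_FPS,
                         context_frames: int = CONTEXT_FRAMES) -> list[str]:
    """Resample to `fps` and trim so the clip holds at most
    `context_frames` frames."""
    duration = context_frames / fps  # seconds of video to keep
    return [
        "ffmpeg", "-i", src,
        "-vf", f"fps={fps}",      # resample to the training frame rate
        "-t", f"{duration:.3f}",  # trim to the context length
        dst,
    ]

cmd = ffmpeg_normalize_cmd("clip_raw.mp4", "clip_18fps.mp4")
print(" ".join(cmd))
```

Running the resulting command (or the equivalent VHS nodes in ComfyUI) over every clip keeps motion speed and prompt adherence consistent across the dataset.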
For use in actual ComfyUI, I include the LoRA nodes after the base model and CLIP but before the 4-step lightning LoRA: BaseModel/CLIP -> Style LoRA -> 4-step LoRA (for the model) -> KSampler/TextEncoder
For preventing blurry faces, I've run into a similar problem a number of times, and you can see it manifest in the V1 of my Anime Style LoRA. The two biggest helpers I've found are including clips and increasing dataset diversity. These are two sides of the same coin, because the model will more or less guess randomly if it has no direction on how to approach things. You want a variety of camera angles, character appearances, body types, etc. to ensure that it grasps the different ways to draw people. For example, if you only train on thin women, large men will be almost entirely unmapped territory, leading to instability; the same is true for camera angles, so if you never show a side profile and only have data where subjects look directly at the camera, you'll get instability when they turn their heads, and if you never include clips where the camera moves or rotates, you'll get background blur during camera movement. In addition to teaching motion stylization, including clips can also help overcome this diversity problem by stabilizing how the model draws new things that are initially out of view, reducing the likelihood of unstable, blurry output when it doesn't have a solid anchor.
@ReltivlyObjectv
Hey! First off, I’m really sorry for the late reply. I’ve been deep in the 'cooking' phase with all the great info you shared!
Thank you so much for the Pastebin link and the detailed breakdown of your training config. It’s incredibly helpful. Based on your advice, I’m currently rendering out a diverse set of character sheets and prepping those 18 FPS video clips to ensure the motion stays consistent.
Just to clarify my project, I’m actually aiming for a Hybrid Clay 2.5D animation look rather than a standard Pixar style. I'm trying to capture that tactile, stop-motion feel while keeping it fluid.
I also had a quick follow-up regarding the ComfyUI node logic: You mentioned placing the Style LoRA before the 4-step Lightning LoRA in the chain. Is the reasoning behind this to ensure the model prioritizes the stylistic weights before the 'distillation' effect of the Lightning LoRA kicks in? I'm curious if this specific sequence is what prevents the style from getting 'washed out' during the fast sampling process.
I’ll definitely share the results with you once I get everything dialed in! Thanks again for being so open with your workflow.
@blsinhvn No worries at all and happy to hear it helped! In regards to the LoRA order, the weight adjustments are applied in order to the base model. The first LoRA only modifies the base model, but the second LoRA modifies the outputs of the first one's modifications. The Lightning LoRA makes quite a few changes to how attention and distillation are performed. If the Lightning LoRA is last, you essentially guarantee that it will generate a video reliably in four steps, but the rapid generation may come at the cost of your style LoRA's effectiveness. If you put the Lightning LoRA before the style LoRA, you risk the Lightning LoRA being less effective and not reliably requiring only four steps. The first LoRA in the chain has the lowest priority and therefore the highest chance of not being able to shine.
As a metaphor, think of it as an essay or article with five different review editors: the style of the last editor is the one that received no further revisions, so their influence and personal style/preference is significantly more noticeable than that of the second person to edit it, whose changes were tweaked by multiple other people.
With all of this in mind, my personal experience is that the Lightning LoRA is effective enough that it can still do its job when included first in the chain, but other limited-use LoRAs often aren't resilient or effective enough to shine through once the Lightning LoRA does its job. I generally only use 1-2 LoRAs at a time, so you may need to put the Lightning LoRA last in the event you're using a style LoRA, a character LoRA, and a concept LoRA all in one generation (or at least increase the step count to compensate for it working less effectively).
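The editor metaphor can be made concrete with a toy numeric sketch (this is an illustration of the intuition only, not the actual LoRA weight math inside the sampler): treat each LoRA as a function applied in sequence, and the last one applied has the final say over the output.

```python
# Toy illustration of the "last editor wins" metaphor (an assumption for
# intuition only, not real LoRA arithmetic): model each LoRA as a
# function applied to the running output in sequence.

def style_lora(x: float) -> float:
    # exaggerates the value, like a strong style push
    return x * 2.0

def lightning_lora(x: float) -> float:
    # pulls the value toward a fixed target, like distillation
    return 0.5 * x + 0.5

base = 1.0

# Style first, Lightning last: Lightning's pull dominates the result.
lightning_last = lightning_lora(style_lora(base))  # 0.5 * 2.0 + 0.5 = 1.5

# Lightning first, Style last: the style push dominates instead.
style_last = style_lora(lightning_lora(base))      # (0.5 * 1.0 + 0.5) * 2 = 2.0

print(lightning_last, style_last)
```

Because function composition is order-sensitive, swapping the two produces different results, which mirrors why the chain position of the Lightning LoRA changes how much of the style survives.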
@ReltivlyObjectv Thanks for the detailed explanation; the metaphor was really helpful! And just a quick ask: would it be okay if I add you on Discord? I'd love to send you updates as things come together and share the LoRA once it's done. I'll DM you my Discord ID! Thanks.
@blsinhvn Happy to help! And sure!