CivArchive
    360 Degree Rotation (Microwave Rotation Wan2.1 I2V LoRA) - v1.0

    Request a Wan2.1 LoRA on our Discord and we will train and open-source it for free.

    Join our Discord to generate videos with the 360 Degree Rotation LoRA for free.

    Wan2.1 14B I2V 480p v1.0:

    Trained on 30 seconds of video comprising 12 short clips (each clip captioned separately) of things being rotated 360 degrees. This was trained on the Wan2.1 14B I2V 480p model.

    The trigger word is: 'r0t4tion 360 degrees rotation'

    See below for some prompt examples that worked well for me. You can also check the videos I've posted here for the captions that were used to generate them. For each video the input image is just the first frame.

    Recommended Settings:

    LoRA strength = 1.0

    Embedded guidance scale = 6.0

    Flow shift = 5.0
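
As a quick reference, the recommended settings above can be collected into a small snippet. This is a hypothetical sketch, not part of the workflow itself: the dictionary keys (lora_strength, guidance_scale, flow_shift) are illustrative names, not actual ComfyUI node fields, and in practice you set these values on the corresponding WanVideo nodes.

```python
# Recommended inference settings for this LoRA, gathered in one place.
# NOTE: key names here are illustrative only; they are not real
# ComfyUI/WanVideo node field names.
RECOMMENDED_SETTINGS = {
    "lora_strength": 1.0,    # LoRA strength
    "guidance_scale": 6.0,   # embedded guidance scale
    "flow_shift": 5.0,       # flow shift
}

def describe_settings(settings: dict) -> str:
    """Render the settings as a short human-readable summary line."""
    return ", ".join(f"{key}={value}" for key, value in sorted(settings.items()))
```

Keeping the values in one dictionary makes it easy to log the exact configuration alongside each generated video.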

    Here's a link to the Wan2.1 I2V LoRA inference workflow I used to generate these videos: https://huggingface.co/Remade/Squish/blob/main/workflow/wan_img2video_lora_workflow.json

    This is a slight modification of Kijai's version, the main difference being the addition of a WanVideo Lora Select node connected to the 'lora' input of the WanVideo Model Loader node. Find Kijai's original workflow here:
    https://github.com/kijai/ComfyUI-WanVideoWrapper/blob/main/example_workflows/wanvideo_480p_I2V_example_02.json

    Prompt Examples:

    The video shows a man seated on a chair. The man and the chair performs a r0t4tion 360 degrees rotation.

    The video features a Pomeranian puppy sitting on a gravel surface, and the puppy undergoes a r0t4tion 360 degrees rotation.
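
The pattern in the examples above, a scene description followed by a sentence applying the trigger phrase to the subject, can be sketched as a small helper. This is a hypothetical utility for illustration, not part of any official tooling; only the trigger phrase itself comes from the model description.

```python
# Trigger phrase from the model description; it must appear in the prompt.
TRIGGER = "r0t4tion 360 degrees rotation"

def build_prompt(scene: str, subject: str) -> str:
    """Compose a prompt in the style of the examples above: a scene
    description, then a sentence applying the trigger phrase to the
    subject of the rotation."""
    return f"{scene} The {subject} undergoes a {TRIGGER}."
```

For example, build_prompt("The video shows a man seated on a chair.", "man") yields a prompt in the same style as the first example above.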

    Let me know if there are any questions, I'll be happy to help!


    Comments (64)

    Morser331 · Mar 12, 2025

    Thanks for making this, camera control in general is super useful

    Bbbrrr · Mar 12, 2025 · 6 reactions

    How many steps did you train? Did you use diffusion-pipe or musubi-tuner?

    Please_Man · Mar 12, 2025

    God bless you

    knall08143 · Mar 12, 2025

    I notice some of the videos are 5 seconds and others 3 seconds. Why?

    deep_synth · Mar 12, 2025 · 2 reactions

    I think there are videos with 24fps


    deep_synth · Mar 13, 2025

    @knall08143 hi

    flash3ang · Mar 12, 2025

    You should now make a slime effect where something turns into slime and gets pulled apart.

    PositivePossessions · Mar 13, 2025 · 3 reactions

    Would it be possible to make a variation of this where the background also rotates?

    huanggou · Mar 13, 2025 · 6 reactions

    Can you make a LoRA where the subject is moving or walking and the camera rotates 360 degrees around it?

    shlakoman · Mar 13, 2025 · 4 reactions

    It might be useful for image-to-3d. Just need to use photogrammetry with it.

    futureflix · Mar 14, 2025

    yes, if you use Sora Loop function

    Cosmic_Crafter · Mar 16, 2025

    i just grabbed it for that!

    PositivePossessions · Mar 13, 2025 · 2 reactions

    I noticed a lot of the time I don't get the full 360 degree rotation (usually it's about 270 degrees). Does anyone have any tips on how to fix this?

    intlex · Mar 14, 2025 · 1 reaction

    Try setting the length to 121 frames and 24 fps.

    SD_AI_2025 · Mar 17, 2025

    @intlex By "try to set length to 121 and 24fps"?

    121 frames is a loop-animation trick from Hunyuan, trained by Tencent. It has absolutely nothing to do with Wan2.1, trained by Alibaba, which is meant to run at 16fps and has no reason to share Hunyuan's "121" trick.

    intlex · Mar 18, 2025

    @SD_AI_2025 but it works.

    Purrification69 · Mar 14, 2025

    It performs best when you actually describe what is happening in the image. Otherwise the subject spins and twists, but not like a "360 microwave" :D

    Testoster1 · Mar 15, 2025

    WanVideoSampler

    'NoneType' object is not callable

    How can I fix this? It's been 2 days and I can't find a fix!

    DigitalZombieAI · Mar 16, 2025 · 2 reactions

    Doesn't work with native ComfyUI workflow.

    SD_AI_2025 · Mar 17, 2025 · 1 reaction

    Did the videos used for training have the same frame count? Because as it is, a true 360 only happens by pure luck.

    Flamz · Mar 17, 2025 · 1 reaction

    Any training code we can look at? I'd love to fine-tune this further.

    DrDRE3 · Mar 18, 2025

    That did work for me, but the character in the image keeps moving and making small movements; I couldn't get a static 360 of the character. Any hints?

    plk · Mar 18, 2025 · 1 reaction

    Works decently. I'd like to see different versions, like one for rotating vertically, and one for a full 360 camera movement where the subject is stationary. If you look up the term "stereograph" or "wigglegram", that would be a decent style too (basically it's just 2 still images used to quickly tilt the view to give a perception of depth). That would pretty much fill in the general possibilities. I think some training could be added that also considers out-of-frame elements, because currently it doesn't really know what to do with subjects that are partially clipped from the image.

    vykq · Mar 19, 2025

    I need help with this... I get an error on "Load WanVideo T5 TextEncoder" Node, my settings are:
    model_name : umt5-xxl-enc-bf16.safetensors
    precision : fp16
    load_device : main_device
    quantization : disabled

    zoom83 · Mar 20, 2025 · 2 reactions

    My best Wan LoRA so far.

    A 720p I2V variant would be nice. (Upscaled 480p doesn't look very realistic.)

    Mr_five · Mar 21, 2025 · 6 reactions

    Hey everyone 👋

    It does a really cool job generating 360° rotation videos from a single image — but I’ve hit a wall and hoping someone here might have some insight.

    The Issue:

    Since it only uses one input image (usually the front view), the AI doesn’t actually know what the back of the statue looks like. So, it just “hallucinates” the backside, often in a way that looks totally different from the actual statue. In my case, the result is quite far off from the real design.

    What I’ve Tried:

    • I provided a clean front image of the statue.

    • The generated rotation video looks great from the front and sides, but once it gets to the back, it’s a different design entirely.

    What I’m Hoping to Achieve:

    1. Is there a way to provide more than one image (like front + back) so the model understands what the full 3D object looks like?

    2. Would training a custom LoRA of this statue help? For example, feeding it multiple angles (front, side, back) so that it learns the actual geometry and doesn’t make stuff up?

    Fauna9680 · Mar 28, 2025

    I'm interested in this. Any workaround?

    Ash0ka74 · Apr 8, 2025

    https://hyper3d.ai/

    With this model, you can feed in multiple images, point clouds, voxels, or bounding boxes.

    swaglordrtz · Mar 22, 2025 · 10 reactions

    Has anyone thought of using the frames of these outputs to train LoRAs? Actually seems incredibly useful.

    Le_Fourbe · Apr 5, 2025

    Yes, I did think of that. It's on my to-do list.

    However, I'd like to train one myself, since this author used a very small 30-second dataset.

    Pixart AI has such a tool and it is good... but censored and paid.

    karunodragon409 · Jun 30, 2025

    @Le_Fourbe Haven't been able to get Pixart AI to work so far.

    DouglasGaia · Mar 22, 2025

    Could you please provide this example prompt for testing?

    Algr · Jun 7, 2025 · 1 reaction

    I'm getting good results with just this:

    r0t4tion 360 degrees rotation

    The camera circles around her.

    She looks to her left and right

    Zergkool · Jun 15, 2025 · 1 reaction

    @Algr That did the trick, ty!

    endersshadow20484 · Apr 5, 2025 · 4 reactions

    Amazing LoRA dude, and you did this on 30 seconds of videos? You mind sharing some more info? Any images, # of repeats, resolution, # of frames to process (chunk 17/33/49, etc.). Would love to get some training advice since there is so little out there for WAN.

    Lazman · Apr 5, 2025 · 20 reactions

    Edit: before downvoting this, read my most recent comments near the bottom. PS: stop downvoting people just because you're too numb in the head to take the time to see where they're coming from.

    For as basic as video LoRAs are at this point, how is this one of the top downloaded? No offence intended, but concepts don't really get much more basic than turning on the spot. It's not a hit on the model or the uploader, just curious: there are DBZ action LoRAs, yet this is the most downloaded. Are people (in general) just really boring, or what?

    Edit: Ok, I just got my first 3D printer, and now my eyes are fully open. This probably is the most practical one on here. The key is that it's not necessarily for making amazing videos, but for it's practical use in contrast to the others. Since most consumer hardware can't even make videos with any practical use as videos, this one does stand out for what it does.

    I'll give it a thumbs up, and probably try it out myself sometime shortly.

    Le_Fourbe · Apr 5, 2025 · 1 reaction

    Because this Lora is actually useful.

    Sometimes it's not just about stuff looking good

    riiahworld · Apr 6, 2025 · 7 reactions

    Your comment shows you know nothing. It's about getting different positions for LoRA making; this is a brilliant model.

    Lazman · Apr 6, 2025

    @Le_Fourbe It's useful to see people turn on the spot? I mean, isn't it kinda about it looking cool if it's literally just gonna be used to make a 3-6 second video clip? why would someone watch it if it didn't look cool? lol..

    Lazman · Apr 6, 2025

    @riiahworld "positions in lora", but the person doesn't move or go anywhere. It's better than the basic img2vid (with the basic wobbly character zoom-in effect), but I've seen more action out of txt2vid. Maybe it's good if it can be used with img2vid?

    getswoll1986 · Apr 6, 2025 · 6 reactions

    It's perfect for creating LoRAs of characters from a ton of different angles with one image, particularly if you want to maintain character consistency.

    Le_Fourbe · Apr 6, 2025 · 2 reactions

    @Lazman As the others said:
    Sometimes you want to recreate a character you made with AI, and a LoRA is one thing that lets you do that.
    To make a LoRA you have to feed it multiple consistent images in different angles and positions.
    This 360 tool is powerful for LoRA TRAINING, as it will give the model every side of your single picture for whatever design you have, making more out of your original image.

    Lazman · Apr 11, 2025 · 1 reaction

    @getswoll1986 That's funny, I actually thought about this very thing just a couple days ago when thinking about this. Yea, in that context, it could certainly be useful.

    Lazman · Apr 11, 2025 · 1 reaction

    @Le_Fourbe Yep, that could work. Question though: I wonder if it would work on characters with tails, or unique characters in general; I mean, characters with less predictable alternate angles due to uniqueness of style. Or can you feed more than one image into it to give it a better idea (assuming the images had the character at the same stance/proportions)?

    Le_Fourbe · Apr 11, 2025 · 1 reaction

    @Lazman Eventually you will have to add that specific aspect manually (with the help of image AI).
    The back side you get will be random, but it will follow the basic logic of the front side, which is still a good shortcut compared to generating a character sheet.

    loneillustrator · Apr 6, 2025 · 2 reactions

    anyone having vertigo after seeing the examples? my head was spinning

    cc5kong · Apr 18, 2025 · 10 reactions

    My model rotated, but it did not rotate 360 degrees, only about 120, even though I set it to 360. How can I solve this problem?

    MrSmith2025 · Jul 3, 2025

    Maybe you need to say 720 degrees? :D

    GardaX · Aug 23, 2025

    Increase the animation time; give it 65 or 81 frames, and offload models to RAM.

    joakimkunzdesign128 · Aug 31, 2025 · 1 reaction

    @GardaX Doesn't matter if you give it 16 or 81 frames - the rotation stays the same.

    bornfreegirlz · Jun 4, 2025 · 1 reaction

    Rad. Can you point me in the direction of a lora training workflow using a video dataset?

    demuzinc511 · Jun 28, 2025 · 1 reaction

    Thank you for the LoRA. What about the license? Can it be used in commercial generations?

    zczcg · Jul 7, 2025 · 1 reaction

    Can it rotate to the right? It almost always outputs a left rotation.

    Reverse the video.

    yikifooler · Jul 9, 2025

    1.3B T2V Wan 2.1 VACE?

    paju1986182 · Aug 21, 2025 · 9 reactions

    Can we have a Wan 2.2 version?

    joakimkunzdesign128 · Aug 25, 2025 · 2 reactions

    I only ever get about 270 degrees with the settings you've recommended, never the full 360. Adding frames or rephrasing it as, e.g., 720 degrees makes zero difference. Could you provide some sort of solution? I see a number of other people having the same problem.

    birdbox · Aug 29, 2025 · 8 reactions

    Can you do one for Wan 2.2?

    crescentvelvet · Sep 13, 2025

    Hello, I am interested in learning more about LoRA training. Could you please share insights on how you trained the model, the amount of data used for training, and the duration of the training process? Thank you!