Ok guys, many of you have requested my configuration for LTXV, so here it is! It's a little rough around the edges, and I don't know how or why it works so nicely for me.
Some of my takeaways for nice generations:
Be patient. The first attempt can produce a horrible or completely static result, while the second, third, or tenth can be amazing!
Some photos give fantastic results while others that seem similar do not. Some photos just don't work well here.
Speaking of photos, photo quality is key! Want people moving in the background? Use a photo where people are already present in the background, and they usually will.
Certain poses also give very interesting results, while others don't. Close portraits can produce amazingly fun facial expressions.
Usually the LLM will generate a nice description, but not always. When the text is bad, I usually copy it, improve it, paste it into the pre-text node, and set "own prompt" to yes.
I hope this works well for you guys too, and feel free to share your creations here!:D
Credits to this workflow:
LTX IMAGE to VIDEO with STG, CAPTION & CLIP EXTEND workflow - v5.0 (Model 0.9.1) | LTXV Workflows | Civitai
Most of the setup comes from there; if you want more explanation boxes and want to configure things yourself, I recommend that workflow instead. It also explains the nodes involved better!
Also amazing work by Lightricks for creating this model:
https://www.lightricks.com/
https://github.com/Lightricks/ComfyUI-LTXVideo
https://github.com/logtd/ComfyUI-LTXTricks
On a side note, if you like the videos and/or photos I would be so happy if you would join my site:
https://www.patreon.com/afterdarkaiart
I spend more and more time on this, and I plan to share my portfolio of 60,000+ photos and videos going forward. Your support will be appreciated, but honestly I will post a lot of nice stuff for free here too :)
Description
Updated nodes used:)
Comments (27)
Very well done, the consistency approach is excellent!
Thank you! It's mostly trial and error, but when I found some numbers that worked well I'm clinging on to them 😅 I've looked into a lot of your resources; a lot of great work, much respect!
@PreviousScheme9737506 I'm trying to incorporate some of your core concepts into my multi-sequence vid WF. If you want to give it a go, it'd be an amazing marriage between the two (the strong fidelity from the last frame of your video would easily be a good starting point for the next sequence, etc.) - https://civitai.com/models/1096868/video-tutorial-resources-ltx-sequencing-workflow-to-create-long-video-clips-new-model
Broke my ComfyUI install, as it uses nodes with outdated dependencies. Setting this up was a complete waste of time. I have to reinstall almost everything to do with ComfyUI. Thanks
It was something about a node you are using requiring an older version of tokenizers. And I couldn't downgrade; at that point ComfyUI was no longer able to load. But I have reinstalled everything now to what I had.
@BlueToothSpeaker I'm sorry to hear that; bad luck when it messes up your install. But thank you for the heads-up. This was the first workflow I posted, so I was a bit too eager :) I've updated everything and made a version 2 now; hopefully it will work better.
@PreviousScheme9737506 Cool, I will give it another try today
This version still destroys ComfyUI. The custom nodes break dependencies so badly that there was no way I could find to recover. It actually destroyed pip's installation record, so tokenizers couldn't be uninstalled/reinstalled normally (pip had no record of it being installed, but the directory and all the files were still present). Even once I managed to restore tokenizers, I had so many dependency errors that it will be easier to reinstall everything. For all intents and purposes, several of the nodes used by this workflow should be considered malware.
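For anyone hitting the same "pip has no record of it, but the files are still there" state described above: it can be diagnosed from the standard library before resorting to a full reinstall. This is a minimal diagnostic sketch, not part of the workflow; `tokenizers` is just the package named in this thread, and the sturdier prevention is running ComfyUI in its own virtual environment so a custom node's dependency pins can't corrupt your main Python install.

```python
# Sketch: classify whether a package's install record is intact, missing,
# or orphaned (files on disk but no dist-info metadata), stdlib only.
import site
from importlib import metadata
from pathlib import Path


def check_install_record(pkg: str) -> str:
    """Return a short description of pkg's install state."""
    try:
        version = metadata.version(pkg)
        return f"{pkg} {version}: metadata record intact"
    except metadata.PackageNotFoundError:
        pass
    # No dist-info record; see whether orphaned files are still on disk.
    for sp in site.getsitepackages() + [site.getusersitepackages()]:
        candidate = Path(sp) / pkg
        if candidate.exists():
            return f"{pkg}: no install record, but files remain at {candidate}"
    return f"{pkg}: not installed"


if __name__ == "__main__":
    # Run this with the same interpreter ComfyUI uses, e.g. the one in its venv.
    print(check_install_record("tokenizers"))
```

If it reports orphaned files, deleting that directory by hand and then reinstalling the pinned version with pip is one way to get back to a consistent state.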
I managed to get all the necessary nodes, but when I hit Queue nothing happens, not even an error. And if I switch tabs to another workflow and then go back, the whole workflow disappears. Any idea what's going on?
Hmm, I'm not sure. It sounds like something I've experienced sometimes with other workflows, but I never figured out why. For me this one works normally 😅 No error messages in the console? And have you selected a model and a start photo? Maybe I'd try restarting ComfyUI at least, but beyond that I'm blank, sorry.
Oh, and maybe update ComfyUI and all the nodes.
I experienced something similar, but it was related to the checkpoint (Load Checkpoint node). By default it was configured to use a checkpoint under a subfolder. Set ckpt_name to the actual checkpoint file and it will hopefully work.
You're the best
Any tips for when they rotate their body and get deformed?
Not really, it happens regularly for me too. I've noticed certain resolutions work better for these scenarios than others; 640x480, for example, seems to produce decent results even when the subject is turning around or moving their hands. But this workflow, and my vision, is to experiment and get OK results at higher and more custom resolutions. It requires some repetition, as some gens simply turn out badly.
@PreviousScheme9737506 Gotcha, thank you
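On the resolution point above: the LTX-Video repo notes that width and height should be divisible by 32 and frame counts should have the form 8*k + 1. A small helper (a convenience sketch, not part of this workflow) to snap arbitrary targets to valid values:

```python
# Snap a target size to dimensions LTX-Video accepts: width and height
# divisible by 32, and a frame count of the form 8*k + 1. The divisibility
# constraints come from the LTX-Video repo; the helpers are just a sketch.

def snap_resolution(width: int, height: int, multiple: int = 32) -> tuple[int, int]:
    """Round each dimension to the nearest multiple of `multiple` (at least one unit)."""
    def snap(v: int) -> int:
        return max(multiple, round(v / multiple) * multiple)
    return snap(width), snap(height)


def snap_frames(frames: int) -> int:
    """Round to the nearest valid frame count of the form 8*k + 1."""
    return max(9, round((frames - 1) / 8) * 8 + 1)


print(snap_resolution(640, 480))  # already valid: (640, 480)
print(snap_resolution(700, 500))  # snapped to (704, 512)
print(snap_frames(100))           # snapped to 97 (12 * 8 + 1)
```

So a "custom" resolution that silently rounds inside the workflow can be pre-snapped here, which makes it easier to compare runs at genuinely identical sizes.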
so I guess LTXV can't do actual hardcore animation?
No, it can't even do nudity unless you go image-to-video.
A perfect tool if I need a video of a talking girl holding a penis :D
It's something :P
I can't install ReActorRestorFace. How can I get it?
Please help
Hmm, I don't really know how to help you here. I tried this workflow with a fresh ComfyUI install and everything installed normally. I see there is an older version of ReActor that is no longer in use, but the standard download suggestion should be the new one. It seems to be this if you want to/must install manually:
https://github.com/Gourieff/ComfyUI-ReActor
https://github.com/Gourieff/ComfyUI-ReActor?tab=readme-ov-file#troubleshooting (follow the instructions there)
Not sure why, but it doesn't look as though the Florence autocaption is working properly. The auto-generated text never changes. [FIXED - refer below]
What I initially found was that all my videos included a default shower, but once I changed a couple of text boxes, it's working well.
Results are pretty good.
Given how many nodes there are, I can't imagine this was an easy workflow to create, but you've done well. :-)
FIXED - The issue was related to the model used by Florence2 - changed it from the default to microsoft/Florence-2-large - and now it's updating just fine.
How do I make it use the image? Every video is similar but not the same person.
Are you sure this workflow is correct?
I'm getting an error on the SamplerCustom model input and on the LTXVScheduler latent input.
Can someone help me?
Thanks!
I think this workflow is a bit outdated now; I personally don't use or maintain it anymore, as I found other workflows catering to my needs once newer versions of Wan were released, etc. I recommend daxamur's workflows; his i2v workflow is what I use these days with minimal modifications.