CivArchive
    ComfyUI-DDD 2D to 3D stereoscopic conversion and 3D stereoscopic generation - low noise

With this LoRA and the ComfyUI-DDD custom node, you can convert 2D pictures to stereoscopic 3D or generate stereoscopic 3D pictures with WAN 2.2.


The resulting 3D images can be output as 3D side-by-side (SBS) for viewing on a VR headset or compatible screen, or arranged for cross-eyed free-viewing.
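The two layouts differ only in which half-width view comes first: a parallel SBS pair becomes a cross-eyed pair by swapping the halves. A minimal NumPy sketch (an illustration only, not part of the ComfyUI-DDD node):

```python
import numpy as np

def sbs_to_crosseye(sbs: np.ndarray) -> np.ndarray:
    """Swap the left/right halves of a side-by-side stereo image.

    Parallel SBS puts the left-eye view on the left; cross-eyed
    free-viewing wants the right-eye view on the left instead.
    """
    half = sbs.shape[1] // 2
    return np.concatenate([sbs[:, half:], sbs[:, :half]], axis=1)
```

The same swap converts in both directions, since it is its own inverse.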

    The example workflows will be tweaked frequently until the ideal settings are found.

About 2D to 3D conversion: this method is much slower than other methods based on depth estimation models (such as MiDaS, Depth-Anything, etc.) and offers no control over the depth of the 3D effect. It's a completely different and experimental approach.
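For contrast, the depth-estimation approach mentioned above works roughly like this: estimate a depth map, then shift each pixel horizontally in proportion to its depth to synthesize the second view. A naive sketch (not MiDaS or Depth-Anything themselves; `max_shift` is an illustrative knob):

```python
import numpy as np

def depth_based_stereo(img: np.ndarray, depth: np.ndarray, max_shift: int = 16) -> np.ndarray:
    """Naive depth-image-based rendering: synthesize a right-eye view by
    shifting pixels left in proportion to normalized depth, then return
    the side-by-side pair. Occlusion holes are left unfilled."""
    h, w = depth.shape
    span = np.ptp(depth)                          # depth range, for normalization
    norm = (depth - depth.min()) / (span + 1e-8)  # 0 (far) .. 1 (near)
    shifts = (norm * max_shift).astype(int)
    right = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            nx = x - shifts[y, x]
            if 0 <= nx < w:
                right[y, nx] = img[y, x]
    return np.concatenate([img, right], axis=1)
```

Note how `max_shift` gives direct control over the strength of the 3D effect, which is exactly the knob the LoRA-based approach does not expose.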

Coming soon:

    • Generate immersive VR180 pictures


Coming later:

    • Generate immersive VR180 videos


    Description

    Initial release, trained at a low resolution

    FAQ

    Comments (58)

BlueB · Sep 25, 2025

What's the link to the custom node? The link provided just takes you to a Hugging Face link to download the same models.

AtraLogika (Author) · Sep 25, 2025

    The link is fixed, sorry about that.

QuodCausis · Sep 25, 2025

Finally a stereo workflow that actually works! Thanks! Spoke too soon (getting stuck during VAE decode).

That's on a 4090 (24 GB), using the Q6 model.

Finally tiled decoding worked (took forever), however in I2V my image changed DRASTICALLY.

AtraLogika (Author) · Sep 25, 2025

Are you using the example workflow? Any errors in the console? The example workflows have been tested on 12 GB VRAM and 32 GB RAM with the --lowvram flag when launching ComfyUI.
Make sure the "length" is set to 9 frames in the WanImageToVideo node.

AtraLogika (Author) · Sep 25, 2025

And don't go too crazy with the image resolution. If the VAE decode is taking a very long time, it's because you're decoding a very high resolution image. I don't recommend going much above 1440 x 1440.

QuodCausis · Sep 25, 2025

@AtraLogika Yes, I used your precise workflow and didn't adjust anything.

QuodCausis · Sep 25, 2025

My results look nothing like the input image.

Jellai · Sep 25, 2025

    Wow... it really really works. The most solid approach I've ever seen. How did you train this? Did you use true stereoscopic video data? Or 3D conversions as data, like how most movies are made 3D?

    I cannot wait for the VR180 stuff. Crazy exciting, but seems impossible to get decent resolution for that kind of video.

Adaptalab0r · Sep 25, 2025

Oh wow! I'm so excited. Seems like a wish come true. Unfortunately I got stuck, as the "Create3DImage" node won't work. Updated ComfyUI and installed it via git clone. Has anybody experienced this and solved it? It doesn't appear in the missing nodes either, or did it for you guys?

AtraLogika (Author) · Sep 25, 2025

    What do you mean it won't work? If it's missing in ComfyUI, simply make sure the python file is in your custom_nodes folder and restart ComfyUI.

Gorfou · Sep 25, 2025

Hello, same problem for me. The init file of the custom node is missing:
    File "<frozen importlib._bootstrap_external>", line 995, in exec_module

    File "<frozen importlib._bootstrap_external>", line 1132, in get_code

    File "<frozen importlib._bootstrap_external>", line 1190, in get_data

    FileNotFoundError: [Errno 2] No such file or directory: 'F:\\ComfyUI\\ComfyUI\\custom_nodes\\ComfyUI-DDD\\__init__.py'

Update: it seems to work if the file is copied into the custom_nodes root folder.

Adaptalab0r · Sep 25, 2025

@AtraLogika It is standard procedure to go to */custom_nodes/ and run "git clone [github repo]"; that's when a subfolder is usually created automatically, which keeps things tidy and clean. So thanks to @Gorfou I just put "comfyui-ddd.py" into the root folder, which worked, although it's a little messy.

    Thank you for the Lora!

Adaptalab0r · Sep 25, 2025
    @Gorfou Thank you for your fast response :-)

AtraLogika (Author) · Sep 26, 2025

    @Gorfou @Adaptalab0r Thanks for your feedback. The custom node can now be installed with git clone.

Gorfou · Sep 26, 2025

@AtraLogika Thank you! It's OK now! :)

cinemaVerse · Sep 25, 2025

    @AtraLogika very good job. I was waiting for something like this. Thank you very much :)

rjox · Sep 25, 2025

    i thought the samples were before/afters and felt like Pam Beesly for a sec

funscripter627 · Sep 25, 2025

    Bro if you manage to get this working for video with actual lens distortion I will give you my firstborn child lmao

williamdebaskerville · Sep 26, 2025

    Something tells me if you get 3D video support, there is no chance you will have a kid at any point in your life

Gsssyik · Sep 25, 2025

I just use the Spatial Media Toolkit app to convert stuff; it can do large batches and works flawlessly. Will try this out for sure.

Fferrett · Sep 26, 2025

Works very well using a Q8 model and a 5090 FE. Now I just have to set up a batch folder process.

tcla75542 · Sep 26, 2025

I have comfyui-ddd.py in the root folder, but I am still getting Missing Node Types.

When loading the graph, the following node types were not found: "Create3DImage".

I used the git clone cmd and I have a ComfyUI-DDD folder in my custom_nodes folder, and in that folder I have the workflow folder, the readme.md, and comfyui-ddd.py, but it still won't load. ComfyUI is updated, and when it can't find the node and I click on Manager, it asks me to install ComfyUI-GGUF (GGUF quantization support for native ComfyUI models), which never seems to install because it keeps coming up.

AtraLogika (Author) · Sep 26, 2025

    I don't know why it doesn't work when placed in a subfolder, but when I find out I will fix the problem. Until then, simply place the py file directly in the custom_nodes folder. Do not use git clone.

AtraLogika (Author) · Sep 26, 2025

The GitHub repo is updated and the custom node can now be installed with git clone.

tcla75542 · Sep 26, 2025

Yeah, ChatGPT fixed it for me. I have to say I still don't understand the reason it didn't work, but I followed what it said and it worked.

The reason you still get Missing Node Types: Create3DImage is almost certainly that DDD isn't actually being imported at startup (a common hiccup with that repo). The fix is to add a tiny __init__.py "shim" inside ComfyUI-DDD so ComfyUI can import it cleanly, even when the main file is named comfyui-ddd.py.
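For anyone hitting the same error: a hyphenated file name like comfyui-ddd.py can't be imported as a normal Python module, which is why an __init__.py shim is needed. A minimal sketch of such a shim using importlib (the NODE_CLASS_MAPPINGS names follow the usual ComfyUI custom-node convention; treat the exact file and attribute names as assumptions, not the repo's actual contents):

```python
# Sketch of a hypothetical ComfyUI-DDD/__init__.py shim: load a .py file
# whose name contains a hyphen and re-export its node mappings.
import importlib.util


def load_module_from_file(name, path):
    """Import a Python file by path, bypassing normal module-name rules
    (hyphens in the file name are fine this way)."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# In the real shim this would look roughly like:
#   from pathlib import Path
#   _mod = load_module_from_file("comfyui_ddd", Path(__file__).parent / "comfyui-ddd.py")
#   NODE_CLASS_MAPPINGS = _mod.NODE_CLASS_MAPPINGS
#   NODE_DISPLAY_NAME_MAPPINGS = getattr(_mod, "NODE_DISPLAY_NAME_MAPPINGS", {})
```

ComfyUI scans custom_nodes for packages exporting NODE_CLASS_MAPPINGS, so re-exporting the mappings from __init__.py is what makes the cloned subfolder load.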

jonog247634 · Sep 26, 2025

@AtraLogika Not for me; when I load the workflow, nothing happens. Scratch that, I didn't realize you have to load the workflow directly from within custom_nodes.

Jellai · Sep 26, 2025

This lora is perfect. I do have a suggestion for the naming, though. In generative AI, I think "2D to 3D" implies that you put in an anime image and it makes it CG/real looking. I think that's why some people are confused, thinking you're showing before-and-afters. As someone who is VERY into this topic, I was even confused for a while.

    Maybe call it "Stereoscopic Conversion" or something. Maybe even "SBS Stereoscopic Conversion", as that clarifies the method of 3D conversion. Or if you want to add clarity to people who don't know what "Stereoscopic" means, you can call it "Stereoscopic Conversion - Standard to 3D".

AtraLogika (Author) · Sep 26, 2025

    Great suggestion. I added a few "stereoscopic" here and there to clarify things. I'm glad you enjoy this lora.

baihsan · Sep 27, 2025

This is what I've waited so long for; the results are impressive. Can't wait for the 180 version.

Kung_fu_Pron · Sep 28, 2025

Just made a workflow that is kinda huge to convert videos instead of pictures. Took me 2 hours, but I can send it to you once I fix some things. I lowered it to 2 steps per sampler and the results are good. I do wonder why it needs 9 frames to be generated, though.

vajuras · Sep 29, 2025

    I could be wrong but I'm assuming the lora is trained on a transition between each eye's image so you need enough frames to get it there.

Kung_fu_Pron · Sep 29, 2025

@vajuras What I did was change the number of required images to be rendered to 5 (it takes the 2nd and 3rd images for the effect), lowered the steps to 4, and switched the model to Wan 2.1 Fun. Then I used boxes to guide which side the image will always shift to, and created 81 Wan 2.1 Fun nodes that each take one of 81 frames from a video with 🔧 Image From Batch (index 0 to index 80), render them, and output the result as a video. It's come a long way. It's for personal use, so it takes a little while to finish. The results are very passable for now; for still images 9 frames is perfect, but for video it was a disaster.
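The frame-by-frame idea above can be sketched outside ComfyUI as a plain loop: split the clip into frames, run the single-image conversion on each in index order, and reassemble. Here `convert_frame` is a stand-in for the actual workflow run, not a real API:

```python
def convert_clip(frames, convert_frame):
    """Apply a per-image stereo conversion to each frame of a clip in
    index order (frame i plays the role of 'Image From Batch' index i),
    returning the converted frames ready to re-encode as a video."""
    return [convert_frame(frame) for frame in frames]
```

The obvious cost is that an 81-frame clip means 81 full workflow runs, which matches the "takes a little while to finish" observation above.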

muttu1989996 · Oct 28, 2025

@Kung_fu_Pron I am trying to turn 2D video into stereoscopic 3D video; it would be very helpful if you could provide me the workflow. Thanks.

skyrimer3d · Feb 2, 2026

    can you share that workflow pls?

natjoegraphy28 · Sep 29, 2025

You changed the game! But I realised that too high a resolution takes mad decode times. I added a Resize Image v2 node and resize to max 1080, which I think solves the resolution issue. At least in a mobile-phone world, this resolution should be fine?

lorenzo881743 · Sep 30, 2025

    How does it compare to nunif iw3?

twinengines1158 · Sep 30, 2025

Please test it if you can and let me know as well; that's what I'm using too.

ValuedRender · Oct 4, 2025

If you really need stereo images, try Owl3D.
It seems paid, but actually isn't: after the first run, all models are downloaded along with a working CLI, so you can freely run conversions of any length from the CLI (it's located under User/AppData).
I'm using it for video-to-VR conversions.

fenasikerim · Oct 13, 2025

    That sounds cool. Any more details?

GardaX · Nov 27, 2025

Owl3D sucked badly. It was pseudo-stereo, or at least it was two years ago. This here is so much better than what I saw from Owl3D. They offered local installation years ago and I tested it; it was miserable back then. I wonder if they've improved it over the years.

skyrimer3d · Feb 2, 2026

Sorry, can you explain this in more detail? I love Owl3D, but it's a fully paid service; I'm interested in the method you're suggesting.

bennyboy_77 · Feb 3, 2026

@skyrimer3d Look up the open-source "IW3" software, which is really easy to use and can (as far as I know) do everything Owl3D does, but on your local machine and for free. I say this as someone who has used both.

ValuedRender · Feb 7, 2026

If you mean more details on how to use the Owl3D CLI from AppData to run unlimited conversions? Of course, I can share what I know, for Windows at least.
If you run Owl3D through the GUI once, it downloads all the model files and sets up the workflow through a CLI, saved to the AppData dir under your Windows user folder. I really don't remember exact details like folder names and paths, but it's easy to track down, e.g. by running a conversion via the GUI and watching the task list in a tool like Process Explorer. That folder leads you to the CLI exe, which provides an interface you can use without the restrictions present in the GUI. You can run as many CLI tasks as you want for free.

ValuedRender · Feb 7, 2026

As for me, I've used this CLI interface on a regular basis for years to convert regular movies to stereo video for VR at zero cost.

23423da · Feb 21, 2026

    @ValuedRender I found the CLI file you mentioned. However, when I run it, the CLI window closes automatically within a few seconds, and I can't enter anything. I would appreciate it if you could tell me how to use it.

ValuedRender · Feb 21, 2026

\AppData\Local\Owl3D\1.4.7\nvidia\dist\process_desktop_main>process_desktop_main.exe --help

usage: process_desktop_main.exe [-h] [--jobid JOBID] [--accountid ACCOUNTID] [--videofile VIDEOFILE] [--depthvideofile DEPTHVIDEOFILE] [--depthvideofile-legacy DEPTHVIDEOFILE_LEGACY] [--predepth-folder PREDEPTH_FOLDER] [--flo-folder FLO_FOLDER] [--videofolder VIDEOFOLDER] [--imgfiles IMGFILES [IMGFILES ...]] [--input-format [INPUT_FORMAT]] [--output-formats OUTPUT_FORMATS [OUTPUT_FORMATS ...]] [--output-file-formats OUTPUT_FILE_FORMATS [OUTPUT_FILE_FORMATS ...]] [--output-imgseq-format [OUTPUT_IMGSEQ_FORMAT]] [--is-dev IS_DEV] [--deep-inpainting-mode [DEEP_INPAINTING_MODE]] [--using-models-v2 [USING_MODELS_V2]] [--using-pipeline-v2 [USING_PIPELINE_V2]] [--half-precision-mode [HALF_PRECISION_MODE]] [--gpu-encode-mode [GPU_ENCODE_MODE]] [--stereo-input-mode [STEREO_INPUT_MODE]] [--using-enhanced-detail [USING_ENHANCED_DETAIL]] [--fisheye-input-mode [FISHEYE_INPUT_MODE]] [--stereo-processing-mode [STEREO_PROCESSING_MODE]] [--device [DEVICE]] [--device-selection [DEVICE_SELECTION]] [--ffmpeg-bin [FFMPEG_BIN]] [--spatial-bin [SPATIAL_BIN]] [--output-dir [OUTPUT_DIR]] [--fallback-output-dir [FALLBACK_OUTPUT_DIR]] [--appdata-dir [APPDATA_DIR]] [--models-dir [MODELS_DIR]] [--depth-cache-dir [DEPTH_CACHE_DIR]] [--depth-cache-dir-legacy [DEPTH_CACHE_DIR_LEGACY]] [--predepth-cache-dir [PREDEPTH_CACHE_DIR]] [--flo-cache-dir [FLO_CACHE_DIR]] [--output-resolution [OUTPUT_RESOLUTION]] [--output-render-modes OUTPUT_RENDER_MODES [OUTPUT_RENDER_MODES ...]] [--output-encoding-setting [OUTPUT_ENCODING_SETTING]] [--trim-black-area [TRIM_BLACK_AREA]] [--enable-cupy [ENABLE_CUPY]] [--enable-metal [ENABLE_METAL]] [--output-codec [OUTPUT_CODEC]] [--video-dur [VIDEO_DUR]] [--video-start-pos [VIDEO_START_POS]] [--subtitle-videofile SUBTITLE_VIDEOFILE] [--subtitle-subtitlefile SUBTITLE_SUBTITLEFILE] [--subtitle-fontfile SUBTITLE_FONTFILE] [--subtitle-video3dformat [SUBTITLE_VIDEO3DFORMAT]] [--using-stream-process-clip [USING_STREAM_PROCESS_CLIP]] [--using-augmented-depth [USING_AUGMENTED_DEPTH]] [--enhanced-detail-mode [ENHANCED_DETAIL_MODE]] [--enhanced-detail-resolution ENHANCED_DETAIL_RESOLUTION] [--smooth-depth-mode [SMOOTH_DEPTH_MODE]] [--num-smooth-iters NUM_SMOOTH_ITERS] [--using-direct-optical-smooth [USING_DIRECT_OPTICAL_SMOOTH]] [--num-smooth-window-size NUM_SMOOTH_WINDOW_SIZE] [--inpainting-mode [INPAINTING_MODE]] [--check-device CHECK_DEVICE] [--power-mode [POWER_MODE]]

optional arguments:
  -h, --help            show this help message and exit
  --jobid JOBID         A unique Id representing the current job
  --accountid ACCOUNTID    A unique Id who triggered the job
  --videofile VIDEOFILE    Video file to be processed
  --depthvideofile DEPTHVIDEOFILE    Precomputed Depth video file to be processed
  --depthvideofile-legacy DEPTHVIDEOFILE_LEGACY    Precomputed Depth video file to be processed for old conversions
  --predepth-folder PREDEPTH_FOLDER    Precomputed predepth folder to be processed
  --flo-folder FLO_FOLDER    Precomputed flo folder to be processed
  --videofolder VIDEOFOLDER    Folder of videos to process
  --imgfiles IMGFILES [IMGFILES ...]    Image files to be processed
  --input-format [INPUT_FORMAT]    Process for different input format. Available: EQUIRECT_360
  --output-formats OUTPUT_FORMATS [OUTPUT_FORMATS ...]    Output 3D formats. Available: SBS, TB, RGBD, ANAGLYPH, CROSS
  --output-file-formats OUTPUT_FILE_FORMATS [OUTPUT_FILE_FORMATS ...]    Output file formats. Available: MP4_VIDEO, PNG_SEQUENCE
  --output-imgseq-format [OUTPUT_IMGSEQ_FORMAT]    Output file format for img sequence
  --is-dev IS_DEV       Flag for development
  --deep-inpainting-mode [DEEP_INPAINTING_MODE]    Flag for using model based inpainting
  --using-models-v2 [USING_MODELS_V2]    Flag for using v2 depth model (zoe)
  --using-pipeline-v2 [USING_PIPELINE_V2]    Flag for using pipeline v2 for keeping original audio and improve av sync
  --half-precision-mode [HALF_PRECISION_MODE]    Enable or disable half precision mode
  --gpu-encode-mode [GPU_ENCODE_MODE]    Enable or disable GPU encoding
  --stereo-input-mode [STEREO_INPUT_MODE]    Enable splitting if input is stereo
  --using-enhanced-detail [USING_ENHANCED_DETAIL]    Enable enhanced 3D detail mode
  --fisheye-input-mode [FISHEYE_INPUT_MODE]    Enable defisheye if input is fisheye input. used for vr180
  --stereo-processing-mode [STEREO_PROCESSING_MODE]    Enable stereo matching for stereo input
  --device [DEVICE]     Override which device to run on
  --device-selection [DEVICE_SELECTION]    Name of the selected device to run on
  --ffmpeg-bin [FFMPEG_BIN]    Path to the ffmpeg executable binary
  --spatial-bin [SPATIAL_BIN]    Path to the spatial executable binary
  --output-dir [OUTPUT_DIR]    Path to where the output data will be stored
  --fallback-output-dir [FALLBACK_OUTPUT_DIR]    Path to where the output data will be stored if output dir is not available
  --appdata-dir [APPDATA_DIR]    Path to where the temporary appdata will be stored
  --models-dir [MODELS_DIR]    Path to where the models are stored
  --depth-cache-dir [DEPTH_CACHE_DIR]    Path to where the generated depth is stored, if exist
  --depth-cache-dir-legacy [DEPTH_CACHE_DIR_LEGACY]    Path to where the generated depth is stored, if exist for legacy conversions
  --predepth-cache-dir [PREDEPTH_CACHE_DIR]    Path to where the generated predepth is stored, if exist
  --flo-cache-dir [FLO_CACHE_DIR]    Path to where the generated flo is stored, if exist
  --output-resolution [OUTPUT_RESOLUTION]    Resolution of the output file
  --output-render-modes OUTPUT_RENDER_MODES [OUTPUT_RENDER_MODES ...]    -9 to 9, 0 to 10, BOTH|LEFT_ONLY|RIGHT_ONLY
  --output-encoding-setting [OUTPUT_ENCODING_SETTING]    Encoding setting of output video
  --trim-black-area [TRIM_BLACK_AREA]    Enable or disable trim black area
  --enable-cupy [ENABLE_CUPY]    Enable or disable CUPY
  --enable-metal [ENABLE_METAL]    Enable or disable METAL
  --output-codec [OUTPUT_CODEC]    Output codec for video
  --video-dur [VIDEO_DUR]    Video duration to process
  --video-start-pos [VIDEO_START_POS]    Video start pos to process
  --subtitle-videofile SUBTITLE_VIDEOFILE    File name for the video with subtitle to be added
  --subtitle-subtitlefile SUBTITLE_SUBTITLEFILE    File name for the subtitle file to be added
  --subtitle-fontfile SUBTITLE_FONTFILE    File name for the font file to be added
  --subtitle-video3dformat [SUBTITLE_VIDEO3DFORMAT]    3D video format of the video
  --using-stream-process-clip [USING_STREAM_PROCESS_CLIP]    Whether to use stream processing or not
  --using-augmented-depth [USING_AUGMENTED_DEPTH]    Whether to use augmented depth or not
  --enhanced-detail-mode [ENHANCED_DETAIL_MODE]    Enhanced detail mode to be used
  --enhanced-detail-resolution ENHANCED_DETAIL_RESOLUTION    Enhanced detail resolution to be used
  --smooth-depth-mode [SMOOTH_DEPTH_MODE]    Smooth mode to be used
  --num-smooth-iters NUM_SMOOTH_ITERS    Smooth iterations to execute
  --using-direct-optical-smooth [USING_DIRECT_OPTICAL_SMOOTH]    Enable or disable direct optical smooth (more computing)
  --num-smooth-window-size NUM_SMOOTH_WINDOW_SIZE    Smooth iterations to execute. only useful when --using-direct-optical-smooth is true
  --inpainting-mode [INPAINTING_MODE]    Inpainting mode to be used
  --check-device CHECK_DEVICE    Flag for checking device mode
  --power-mode [POWER_MODE]    Power mode to be used

This is the help output.

ValuedRender · Feb 21, 2026

    @skyrimer3d @23423da @fenasikerim

Run this file via Windows CMD, don't just open it in Explorer; this is a CLI program.
If you're not familiar with CLIs, I suggest copy-pasting this help output to GPT and asking it to generate a command for what you want, then copy-paste that into CMD.

Like:

    process_desktop_main.exe ^
    --videofile "D:\video\input.mp4" ^
    --output-formats SBS ^
    --output-file-formats MP4_VIDEO ^
    --output-dir "D:\video\output"

ValuedRender · Feb 21, 2026

For a 1-hour movie it takes a long time, about ~6 hours, so first try a short video to verify settings.
I usually use a command to process all files in a folder, specifying the input and output directories, and then go to bed.
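Based on the help output above, the folder-batch variant presumably uses the --videofolder flag; an untested sketch (the paths are placeholders):

```shell
process_desktop_main.exe ^
  --videofolder "D:\video\in" ^
  --output-formats SBS ^
  --output-file-formats MP4_VIDEO ^
  --output-dir "D:\video\out"
```

As with the single-file example above, verify the settings on a short clip first, since a full folder run can take all night.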

skyrimer3d · Feb 21, 2026

@ValuedRender Tons of amazing info in this thread; I have to check it all. Thank you so much!

fenasikerim · Feb 21, 2026

    @ValuedRender Looks like they fixed that in the newest version:

    2026-02-21 19:11:11 INFO [MainThread:29400] Parent process name: powershell.exe

    [Exception] Unauthorized execution source

    2026-02-21 19:11:11 ERROR [MainThread:29400] Unauthorized execution source

ValuedRender · Feb 21, 2026

    @fenasikerim 
Yeah, sadly they patched the exe in 2.0.3 :(. But version 1.4.7 still works. With all dependencies it's about 3 GB. I can share it as a full, ready-to-go archive:
shareyougo DOT com SLASH tgbpbrgc (expires in 10 days)


fenasikerim · Feb 21, 2026

@ValuedRender I have tried 1.4.7, but the included Python packages seem to be too old for my 5090. Getting a lot of errors even in the official GUI. Not important enough for me to spend time on.

tcla75542 · Oct 15, 2025

    Any news on the vr180?

boobkake22 · Oct 15, 2025

    A proper video solution would be really neat, for sure. Staying tuned.

SmoovIncredibo · Nov 26, 2025

    this is very impressive!

    On another note, ow.

GardaX · Nov 27, 2025

I tested all samples, including user samples, and the results are very impressive. True 3D stereo, not a fake one. Is this project dead? 3D stereo VR180 would be great to see.

rafaelldestilo · Jan 21, 2026

I'm in the same boat. I used IW3, excellent for flat 3D, and I've also used Any v1, v2, v3, and DeepPro. I even found an excellent 360° LoRA for Z Image Turbo, but what I really want is the 180° LoRA. I'll test this LoRA in comparison to IW3, though.

yamavishnu892 · Jan 22, 2026

    Is there a way to do this I2V instead of T2V?

    LORA
    Wan Video 2.2 I2V-A14B

    Details

    Downloads
    720
    Platform
    CivitAI
    Platform Status
    Available
    Created
    9/25/2025
    Updated
    5/13/2026
    Deleted
    -
    Trigger Words:
    a st3r30sc0pic view of

    Files

    stereo140epochs-low.safetensors

    Mirrors

HuggingFace (1 mirror)