CivArchive
    Experiment - Depth Map - V1
    NSFW

    Recently, I was playing with a workflow that required passing generated images through a depth estimation model, and I thought: why not generate depth maps directly?

    The results have been, well, favorable. The model struggles with objects that are naturally black or white, and it sometimes adds too much detail. With that said, it might still be useful.

    My use for this is as follows:

    To start, I generate a few images close to what I want, ending up with something like this:

    I can then bash those around and edit them in Photoshop until I get something I like. Since it's all grayscale, it's much simpler to work with. After that, I'll end up with something like this:
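    The kind of grayscale edit described here can also be sketched programmatically. A minimal NumPy example (the `adjust_depth_region` helper and the 0 = far / 255 = near convention are my own assumptions for illustration, not part of this model):

    ```python
    import numpy as np

    def adjust_depth_region(depth, box, offset):
        """Shift the depth values inside a rectangular region of a grayscale map.

        depth  : 2-D uint8 array (assuming 0 = far, 255 = near)
        box    : (top, left, bottom, right) region to edit
        offset : signed value added to the region, clipped to the valid range
        """
        out = depth.astype(np.int16)  # widen so the add can't wrap around
        t, l, b, r = box
        out[t:b, l:r] += offset
        return np.clip(out, 0, 255).astype(np.uint8)

    # Toy 4x4 "depth map": a flat mid-grey plane.
    depth = np.full((4, 4), 128, dtype=np.uint8)

    # Pull the top-left 2x2 block closer to the camera (brighter).
    edited = adjust_depth_region(depth, (0, 0, 2, 2), 80)
    print(edited[0, 0], edited[3, 3])  # 208 128
    ```

    The same idea extends to any levels/curves-style adjustment: because the map is a single channel, pushing a region nearer or farther is just arithmetic on pixel values.
    
    
    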

    With that, I can do another T2I run with a depth ControlNet, which gives me a result like this:

    The point is primarily to simplify the editing work, and to avoid the artifacts from my usual I2I workflow.

    Hopefully that can be useful to some of you as well!

    Description

    The model was trained with a "depth map" trigger word, but in my experiments, it wasn't really a requirement for the desired effect.
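    As a usage sketch, the trigger word would simply be prepended to the prompt (the prompt below is my own illustration, not from the model card):

    ```
    depth map, 1girl, standing, simple background
    ```

    Per the note above, the trigger word appears to be optional, but including it should bias generation toward the depth-map style.
    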

    LORA
    Illustrious

    Details

    Downloads
    35
    Platform
    CivitAI
    Platform Status
    Available
    Created
    3/30/2026
    Updated
    4/2/2026
    Deleted
    -

    Files

    KB7V3KZ11TX0D93M74Z22W5NH0.safetensors

    Mirrors

    Available On (1 platform)