CivArchive
    JoyAI Image | Single-Image Editing Workflow - v1.0
    NSFW

    This workflow is designed for JoyAI-Image single-image editing, focusing on the controlled transformation of a single source image driven by a direct text instruction. Its main purpose is to let creators upload one image, describe the intended edit, and generate a new image that follows the instruction while preserving the identity, structure, and visual logic of the original picture.

    The workflow uses the JoyAI-Image editing route with the following components:

    JoyAI-Image-Und-merger_bf16 (CLIP / understanding component)
    joy_image_transformer (main image transformer model)
    Wan2.1_VAE.pth (VAE)
    JoyAI_Image_ENCODER (image-and-prompt conditioning)
    JoyAI_Image_LATENTS (latent preparation)
    JoyAI_Image_SM_KSampler (sampling)
    JoyAI_Vae_Decoder (decoding)
    SaveImage (final output)

    Compared with a pure text-to-image workflow, this setup is built around editing an existing image rather than generating everything from scratch.
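    As a rough sketch, the editing route can be pictured as a linear node chain. The layout below is illustrative only, not the actual ComfyUI workflow JSON; the "LoadImage" and "ImageScale" stage names are my assumptions for the load/scale steps, while the JoyAI node names come from the description above.

```python
# Illustrative sketch of the JoyAI-Image editing data flow.
# NOT the real ComfyUI graph JSON; each entry maps a pipeline
# stage to the node said to handle it.
node_chain = [
    ("load",    "LoadImage"),              # upload the source image (assumed node name)
    ("scale",   "ImageScale"),             # resize to the working size (assumed node name)
    ("encode",  "JoyAI_Image_ENCODER"),    # image + prompt conditioning
    ("latents", "JoyAI_Image_LATENTS"),    # latent preparation
    ("sample",  "JoyAI_Image_SM_KSampler"),# denoising / sampling
    ("decode",  "JoyAI_Vae_Decoder"),      # latents back to pixels (Wan2.1_VAE.pth)
    ("save",    "SaveImage"),              # final output
]

for stage, node in node_chain:
    print(f"{stage:>7} -> {node}")
```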

    The core strength of this workflow is instruction-based single-image editing. The input image is first loaded, scaled to the target working size, and passed into the JoyAI encoder together with the prompt. This allows the model to understand both the visual content of the image and the requested change. The workflow can be used for object movement, object rotation, camera-view changes, background replacement, character scene transfer, poster-style redesign, and visual composition correction.
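    "Scaled to the target working size" usually means fitting the source image into the model's working resolution while preserving aspect ratio and snapping both sides to a stride the latent space expects. The exact target size and stride for JoyAI-Image are not stated here, so the defaults below (1024 px box, stride 64) are assumptions:

```python
def working_size(width, height, target=1024, stride=64):
    """Fit (width, height) inside a target x target box, preserving the
    aspect ratio, then snap both sides down to a multiple of `stride` so
    the latent dimensions stay valid. Target/stride values are assumed."""
    scale = min(target / width, target / height, 1.0)  # never upscale
    w = max(stride, int(width * scale) // stride * stride)
    h = max(stride, int(height * scale) // stride * stride)
    return w, h
```

    For example, an 800x600 upload would be snapped to 768x576 under these assumed defaults.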

    The uploaded workflow includes several practical prompt templates. For object movement, the instruction can follow a format such as moving a specific object into a target area and then removing the guide box. For view rotation, the prompt can ask the model to rotate an object to show a front, left, right, rear, front-right, front-left, rear-right, or rear-left side view. For camera control, the workflow provides a structured prompt format: move the camera, set yaw and pitch rotation, define zoom in / out / unchanged, and keep the 3D scene static while changing only the viewpoint.
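    The structured camera-control format lends itself to programmatic prompt building. The wording below is my own paraphrase of the format described above, not the exact template shipped with the workflow:

```python
def camera_prompt(yaw=0, pitch=0, zoom="unchanged"):
    """Build a camera-control instruction in the structured format
    described above. Illustrative paraphrase, not the shipped template."""
    if zoom not in ("in", "out", "unchanged"):
        raise ValueError("zoom must be 'in', 'out', or 'unchanged'")
    return (
        f"Move the camera: rotate yaw {yaw} degrees and pitch {pitch} degrees, "
        f"zoom {zoom}. Keep the 3D scene completely static and change only "
        f"the viewpoint."
    )
```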

    The example editing prompt in the workflow is a more complex character-scene transformation. It asks the model to keep the same blue-haired cyber mechanical girl, preserve her facial features, blue-pink gradient twin tails, mechanical ear-side devices, neck connection structure, and metallic body, while replacing and expanding the background into a realistic open grassland scene. It also asks for running motion, wind-blown hair, natural mechanical limb dynamics, sunlight, ground reflection, character shadow, metal highlights, and clean integration with the outdoor environment. This shows the workflow is not limited to tiny edits; it can handle larger visual changes when the prompt clearly tells the model what must remain and what should change.

    The workflow is especially useful when you need a controllable one-image editing tool for character redesign, product movement, object repositioning, background replacement, camera-angle testing, scene expansion, pose reinterpretation, and creative visual iteration. The key is to write the prompt like an editing command: state the identity features to keep, define the target change, specify what must not change, and describe how lighting, shadow, texture, and perspective should blend naturally.
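    That keep / change / blend structure can be templated. The field names and wording here are illustrative, assembled from the example prompt above, not a format defined by the workflow itself:

```python
def editing_command(keep, change, blend):
    """Compose an instruction-style editing prompt: what to preserve,
    what to change, and how the result should blend. Illustrative only."""
    return (
        "Keep unchanged: " + ", ".join(keep) + ". "
        "Change: " + ", ".join(change) + ". "
        "Blend: " + ", ".join(blend) + "."
    )

# Fields taken from the character-scene example described above.
prompt = editing_command(
    keep=["facial features", "blue-pink gradient twin tails", "metallic body"],
    change=["replace the background with an open grassland", "running pose"],
    blend=["natural sunlight", "ground reflection", "character shadow"],
)
```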

    If you want to see how the JoyAI image encoder, single-image prompt instruction, latent editing route, sampler, and final image output work together, watch the full video tutorial (the Bilibili link is provided below).

    ⚙️ Try the Workflow Online

    👉 Workflow: https://www.runninghub.ai/post/2043308514138918913?inviteCode=rh-v1111

    Open the link above to run the workflow directly online and view the generation results in real time.

    If the results meet your expectations, you can also deploy it locally for further customization.

    🎁 Fan Benefits: Register now to get 1,000 points, plus 100 points per daily login, and enjoy RTX 4090-level performance with 48 GB of high-performance compute!

    📺 Bilibili Updates (Mainland China & Asia-Pacific)

    If you are in Mainland China or the Asia-Pacific region, you can watch the video below for workflow demos and a detailed creative breakdown.

    📺 Bilibili Video: https://www.bilibili.com/video/BV1MEQbBsE5z/

    I will continue updating model resources on Quark Drive:

    👉 https://pan.quark.cn/s/20c6f6f8d87b

    These resources are mainly prepared for local users, making creation and learning more convenient.


    Description

    Workflows
    LTXV 2.3

    Details

    Downloads
    28
    Platform
    CivitAI
    Platform Status
    Available
    Created
    5/12/2026
    Updated
    5/14/2026
    Deleted
    -

    Files

    joyaiImageSingleImage_v10.zip

    Mirrors