    Anima Image-to-Image Workflow - v1.0

    This ComfyUI workflow is designed for Anima image-to-image generation, anime-style image reconstruction, prompt-assisted redraw, and controlled visual transformation of an existing image. It lets creators upload a source image, analyze it automatically with Qwen3-VL, convert the image content into a usable text description, and then use Anima Preview to redraw or transform the image through an image-to-image pipeline.

    Unlike a pure text-to-image workflow, this graph starts from an existing image. The source image supplies the original structure, composition, subject placement, and visual direction; the model then combines the generated prompt with the encoded image latent to create a new result. This makes the workflow useful when you already have an image idea but want to restyle it, improve it, convert it to an anime look, rebuild it with Anima, or create a controlled variation without starting from zero.

    The workflow is built around Anima Preview, using anima-preview.safetensors as the main diffusion model. It also uses qwen_3_06b_base.safetensors as the CLIP/text encoder and qwen_image_vae.safetensors as the VAE. This gives the workflow a lightweight but practical image-to-image generation structure for anime-style and illustration-style reconstruction.
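    For local use, the three model files typically go into the standard ComfyUI model folders. Folder names vary slightly across ComfyUI versions (older builds use models/unet and models/clip instead of models/diffusion_models and models/text_encoders), so treat this layout as a guideline:

        ComfyUI/models/diffusion_models/anima-preview.safetensors
        ComfyUI/models/text_encoders/qwen_3_06b_base.safetensors
        ComfyUI/models/vae/qwen_image_vae.safetensors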

    One of the most useful parts of the workflow is the Qwen3VLProcessor node. The source image is passed into Qwen3-VL with the instruction “Describe this anime image.” Qwen3-VL then generates a text description of the image content. This response is connected directly into the positive prompt CLIPTextEncode node. In other words, the workflow can automatically turn the uploaded image into a prompt, then use that prompt to guide the Anima redraw process.
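    If you want to reproduce this captioning step outside ComfyUI, any vision-language captioner can stand in for the Qwen3VLProcessor node. The sketch below uses the Hugging Face transformers image-to-text pipeline with a BLIP captioner as a placeholder; the model name and file path are illustrative, and the workflow itself runs Qwen3-VL through its own node:

        # Stand-in for the workflow's captioning step (BLIP instead of Qwen3-VL).
        from transformers import pipeline

        captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
        result = captioner("source_image.png")  # accepts a file path, URL, or PIL image
        prompt = result[0]["generated_text"]
        print(prompt)  # this text plays the role of the positive prompt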

    This automatic prompt reconstruction is useful for users who do not want to manually describe every detail in the source image. If the image contains a character, clothing, pose, background, color palette, or scene style, Qwen3-VL can produce a descriptive prompt that gives Anima a clearer semantic direction. This helps the workflow preserve the image concept while still allowing the model to rebuild the final result.

    The source image is first loaded through LoadImage, then resized through image_scale_pixel_v2. The resize node controls the total pixel count and aligns the image dimensions to a model-friendly grid. In the uploaded setup, the total pixel target is about 1.0486 megapixels (the equivalent of 1024 x 1024) with 64-pixel alignment. This keeps the workflow stable and makes the image suitable for VAE encoding and Anima sampling.
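    The arithmetic behind this resize is simple: scale the image so its total area is near the pixel target, then snap each side to a multiple of 64. A minimal sketch in Python (the function name and the exact rounding direction are assumptions; the node may round differently):

        import math

        def scale_to_megapixels(width, height, target_pixels=1_048_576, align=64):
            """Scale (width, height) so the area is close to target_pixels,
            then snap both sides down to a multiple of `align`."""
            scale = math.sqrt(target_pixels / (width * height))
            new_w = max(align, int(width * scale) // align * align)
            new_h = max(align, int(height * scale) // align * align)
            return new_w, new_h

        print(scale_to_megapixels(1920, 1080))  # -> (1344, 768), about 1.03 megapixels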

    The resized image is then sent to VAEEncode. This converts the source image into latent space, allowing the KSampler to use it as the image-to-image starting point. This is the core difference between this workflow and normal text-to-image generation. The sampler is not generating from empty noise only; it is using the original image latent as the base structure.

    The prompt route has two parts. The positive prompt is generated from Qwen3-VL and encoded through CLIPTextEncode. The negative prompt is manually defined and includes terms such as low quality, worst quality, blurry, bad anatomy, bad hands, extra fingers, fused fingers, deformed face, text, watermark, logo, and JPEG artifacts. This negative prompt is practical for anime and illustration workflows because it suppresses common generation problems.

    The main generation stage uses KSampler. In the uploaded setup, the sampler runs 31 steps with CFG 4, the er_sde sampler, the simple scheduler, and a denoise value of about 0.7. This is a medium-to-strong image-to-image redraw setting: it lets the model transform the source image noticeably while still using the original image as structural guidance.

    The denoise value is important. A low denoise value would preserve more of the original image and make only light changes. A high denoise value would allow a stronger redraw and more style transformation. The included value around 0.7 is suitable when the user wants Anima to reconstruct the image with a visible model style rather than only perform a small enhancement. It is a good starting point for anime conversion, character redraw, image cleanup, style transfer, and creative variation.
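    Mechanically, denoise controls how far down the noise schedule the source latent is pushed before sampling begins: the latent is noised to an intermediate level, and only the remaining fraction of the schedule is sampled. The sketch below shows one common convention (diffusers-style img2img, where early steps are skipped; ComfyUI's KSampler keeps the full step count and compresses the schedule instead, so the exact bookkeeping differs):

        def img2img_step_range(steps: int, denoise: float):
            """Return (start_step, total_steps) when only the last
            `denoise` fraction of the schedule is sampled."""
            start = steps - int(steps * denoise)
            return start, steps

        # With the uploaded settings: 31 steps, denoise 0.7
        start, total = img2img_step_range(31, 0.7)
        print(f"skip steps 0..{start - 1}, sample steps {start}..{total - 1}")
        # denoise 1.0 starts from pure noise, i.e. plain text-to-image behavior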

    After sampling, VAEDecode converts the latent output back into an image. The result is then shown through PreviewImage and saved through SaveImage. This keeps the workflow simple and direct: load image, analyze image, encode image, redraw with Anima, preview, and save.
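    Because ComfyUI exposes an HTTP endpoint, this whole graph can also be queued programmatically. A minimal sketch, assuming a local ComfyUI server on the default port and a workflow exported through "Save (API Format)"; the file name is a placeholder:

        import json
        import urllib.request

        # Load the workflow exported in API format from ComfyUI
        with open("anima_img2img_api.json", "r", encoding="utf-8") as f:
            workflow = json.load(f)

        payload = json.dumps({"prompt": workflow}).encode("utf-8")
        req = urllib.request.Request(
            "http://127.0.0.1:8188/prompt",  # default local ComfyUI address
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(resp.read().decode("utf-8"))  # queue confirmation with a prompt_id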

    This workflow is especially useful for anime image-to-image tasks. It can take a rough image, an old generated image, a reference illustration, a screenshot-like input, or a draft composition and rebuild it into a cleaner Anima-style output. It is also useful for testing how Anima interprets image structure when guided by an automatically generated Qwen3-VL prompt.

    The workflow is also useful for prompt learning. Because Qwen3-VL describes the input image, users can see how the model interprets the source image. This can help creators learn what kind of prompt language works for anime-style reconstruction. Users can also manually edit the Qwen3-VL response if they want more control. For example, they can keep the character and pose description, but change the background, clothing, lighting, or style direction before running Anima.

    Main features:

    - Anima image-to-image workflow

    - Uses anima-preview.safetensors

    - Qwen 3 0.6B text encoder support

    - Qwen image VAE support

    - Qwen3-VL automatic image description

    - Source image to positive prompt conversion

    - image_scale_pixel_v2 resolution control

    - VAEEncode image latent initialization

    - KSampler image-to-image redraw

    - Medium-to-strong denoise transformation

    - Negative prompt for anime artifact suppression

    - VAEDecode final image output

    - PreviewImage and SaveImage export

    - Suitable for anime redraw, image variation, style reconstruction, and visual cleanup

    Recommended use cases:

    Anime image-to-image generation, Anima Preview testing, image restyling, anime character redraw, prompt-assisted image reconstruction, source image variation, AI illustration cleanup, draft-to-final image generation, reference image transformation, style exploration, character concept variation, social media cover creation, Civitai example image preparation, RunningHub workflow publishing, YouTube thumbnail testing, and Bilibili cover image production.

    Suggested workflow:

    Start by uploading the source image. The source image should have a clear subject and composition. A character portrait, anime illustration, concept sketch, AI-generated image, or clean reference image works best. If the image is too blurry, too compressed, or visually chaotic, the generated result may also become unstable.

    Let Qwen3-VL analyze the image. The workflow uses the instruction “Describe this anime image,” which is suitable for anime-style inputs. If you want a different type of result, you can change this instruction. For example, you can ask Qwen3-VL to describe the image as a cinematic anime poster, a character design sheet, a detailed fantasy illustration, or a social media cover.

    Review the generated prompt when possible. The Qwen3-VL response becomes the positive prompt for Anima. If the description misses important details, manually add them. If it describes something incorrectly, remove or correct that part. The automatic prompt is convenient, but manual correction can improve final quality.

    Use the resize node to control the working size. The uploaded setup uses a total pixel control and 64-pixel alignment. This is a good general setting for model compatibility. If you want higher detail, increase the total pixel value carefully. If your GPU or cloud environment has limited memory, reduce it.

    Adjust denoise based on the goal. If you want to keep the source image closer to the original, reduce denoise. If you want a stronger Anima redraw or style transformation, keep denoise higher. The included value around 0.7 is suitable for visible reconstruction. For subtle cleanup, try lower values. For more creative transformation, try higher values.

    Use the negative prompt to control common problems. The included negative prompt is practical for anime generation because it suppresses blur, bad anatomy, hand errors, extra fingers, deformed faces, text, watermark, logo, and JPEG artifacts. If your result has specific issues, add targeted negative terms.

    Run several seeds for variation. Image-to-image workflows can produce different outputs from the same source image and prompt. If the first result is not ideal, test a few seeds before changing the whole workflow. If the structure is wrong, reduce denoise. If the result is too close to the source, increase denoise.
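    Seed sweeps are easy to script on top of the API call shown earlier. A sketch, assuming the KSampler node has ID "3" in your exported JSON (check your own export for the real node ID):

        import json
        import urllib.request

        with open("anima_img2img_api.json", "r", encoding="utf-8") as f:
            workflow = json.load(f)

        KSAMPLER_ID = "3"  # placeholder: look up the real node ID in your export

        for seed in (101, 102, 103, 104):
            workflow[KSAMPLER_ID]["inputs"]["seed"] = seed
            payload = json.dumps({"prompt": workflow}).encode("utf-8")
            req = urllib.request.Request(
                "http://127.0.0.1:8188/prompt",
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req).close()  # one queued job per seed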

    For character images, inspect face identity, hair shape, clothing continuity, hands, and body proportions. For background-heavy images, check perspective, lighting, and object consistency. For cover images, check whether the subject remains clear and whether the final result has enough visual impact.

    If the Qwen3-VL prompt is too plain, add style language manually. You can add terms such as detailed anime illustration, clean line art, cinematic lighting, soft shadows, sharp character design, high-quality rendering, expressive face, refined clothing detail, or dramatic composition. Keep the prompt focused and avoid conflicting style instructions.

    This workflow is designed as a practical Anima image-to-image pipeline for ComfyUI users. It combines image loading, automatic image captioning, prompt conditioning, latent image-to-image sampling, and final output into one simple graph. It is especially useful for creators who want a fast and controllable way to turn an existing image into a new Anima-style result without writing the entire prompt manually.

    🎥 YouTube Video Tutorial

    Want to know what this workflow actually does and how to start fast?

    This video explains what the tool is and how to launch the workflow instantly, and shares my core design logic: no local setup, no complicated environment.

    Everything starts directly on RunningHub, so you can experience it in action first.

    👉 YouTube Tutorial: https://youtu.be/J2A8JWDCUhk

    Before you begin, I recommend watching the video thoroughly — getting the full context helps you understand the tool faster and avoid common detours.

    ⚙️ RunningHub Workflow

    Try the workflow online right now — no installation required.

    👉 Workflow: https://www.runninghub.ai/post/2021929700087566337/?inviteCode=rh-v1111

    If the results meet your expectations, you can later deploy it locally for customization.

    🎁 Fan Benefits: Register to get 1000 points plus 100 points per daily login, and enjoy 4090 performance with 48 GB of memory!

    📺 Bilibili Updates (Mainland China & Asia-Pacific)

    If you’re in the Asia-Pacific region, you can watch the video below to see the workflow demonstration and creative breakdown.

    📺 Bilibili Video: https://www.bilibili.com/video/BV1FscqzREni/

    ☕ Support Me on Ko-fi

    If you find my content helpful and want to support future creations, you can buy me a coffee ☕.

    Every bit of support helps me keep creating — just like a spark that can ignite a blazing flame.

    👉 Ko-fi: https://ko-fi.com/aiksk

    💼 Business Contact

    For collaboration or inquiries, please contact aiksk95 on WeChat.


    📦 Quark Netdisk Model Resources

    I keep model resources continuously updated on Quark Netdisk (夸克网盘):

    👉 https://pan.quark.cn/s/20c6f6f8d87b

    These resources are mainly for local users, to support creation and learning.


    Details

    Downloads: 114
    Platform: CivitAI
    Platform Status: Available
    Created: 5/9/2026
    Updated: 5/14/2026

    Files

    animaImageToImage_v10.zip

    Mirrors

    HuggingFace (1 mirror)
    CivitAI (1 mirror)