ComfyUI + Reallusion = Speed, Accessibility, and Ease for 3D visuals
Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.
Open preloaded workflow on RunComfy (browser)
Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the zip follows the same runnable workflow you can open on RunComfy.
When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.
How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
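The steps above run through the ComfyUI interface, but the local batch scripting mentioned earlier can drive the same graph through ComfyUI's HTTP API. Below is a minimal sketch, assuming a default local server at `127.0.0.1:8188` and a workflow exported in API format ("Save (API Format)" in ComfyUI); the filename in the usage comment is hypothetical:

```python
# Sketch: queue this workflow on a local ComfyUI server via its HTTP API.
# Assumptions (not from the page above): server at 127.0.0.1:8188, and the
# workflow JSON was exported in API format, not the regular UI save format.
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI address


def build_payload(workflow: dict, client_id: str = "rl-ai-render") -> bytes:
    """Wrap an API-format workflow graph in the JSON body /prompt expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")


def queue_workflow(path: str) -> dict:
    """Load a workflow JSON file and submit it to the /prompt endpoint."""
    with open(path, encoding="utf-8") as f:
        workflow = json.load(f)
    req = urllib.request.Request(
        COMFY_URL + "/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # response includes the queued prompt_id


# Usage (with the ComfyUI server running):
#   queue_workflow("Reallusion-AI-Render.json")  # hypothetical filename
```

Looping `queue_workflow` over a folder of exported graphs (or over seed overrides patched into the loaded `workflow` dict) gives simple batch runs without touching the browser UI.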
Expectations — the first run may pull large model weights; cloud runs may require a free RunComfy account.
Overview
This workflow pipes prompts and 3D-derived guidance (pose, depth, edges, normals) from Character Creator or iClone straight into ComfyUI, using Reallusion AI Render's custom nodes to generate consistent, scene-accurate images or sequences for characters and scenes. You iterate quickly in Reallusion's real-time environment while the ComfyUI graph handles precise, reproducible rendering with room for deep customization.
Tutorials:
AI Render Image-to-Image Workflow Tutorial
AI Render Video-to-Video Workflow Tutorial
Notes
ComfyUI Reallusion AI Render Workflows Collection — uses popular models for 3D-Control, Canny, Pose, and Depth; see the RunComfy page for the latest node requirements.
Description
Initial release — Reallusion-AI-Render.
