Next-gen segmentation tool for precise object masking and tracking.
Who it's for: creators who want this pipeline in ComfyUI without assembling nodes from scratch. Not for: one-click results with zero tuning — you still choose inputs, prompts, and settings.
Open preloaded workflow on RunComfy
Why RunComfy first
- Fewer missing-node surprises — run the graph in a managed environment before you mirror it locally.
- Quick GPU tryout — useful if your local VRAM or install time is the bottleneck.
- Matches the published JSON — the zip follows the same runnable workflow you can open on RunComfy.
When downloading for local ComfyUI makes sense — you want full control over models on disk, batch scripting, or offline runs.
How to use (local ComfyUI)
1. Load inputs (images/video/audio) in the marked loader nodes.
2. Set prompts, resolution, and seeds; start with a short test run.
3. Export from the Save / Write nodes shown in the graph.
Expectations — First run may pull large weights; cloud runs may require a free RunComfy account.
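Once the Save / Write nodes have produced a grayscale mask for a frame, a common next step is to apply it outside ComfyUI, e.g. to hand a clean cutout to comp. A minimal NumPy sketch of that step (the function name and array conventions are ours, not part of the workflow):

```python
import numpy as np

def apply_mask(frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Build an RGBA cutout: RGB from the frame, alpha from the exported
    mask (255 = keep, 0 = transparent). Shapes must match the frame."""
    assert frame.shape[:2] == mask.shape, "mask must match frame resolution"
    return np.dstack([frame, mask]).astype(np.uint8)
```

The same alpha-from-mask idea applies per frame for the mask-only MP4 the video lane writes out.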
Overview
This segmentation workflow identifies and isolates objects in images and video frames. It generates precise masks and tracks objects consistently across frames, cutting manual rotoscoping time in compositing, editing, and post-production. Built for creators who need control and accuracy, it uses text, point, and box prompts to handle cluttered scenes, with propagation keeping selections stable across a clip. Well suited to VFX designers and AI artists who need clean, consistent segmentation outputs.
Key nodes in the ComfyUI SAM 3 workflow
LoadSAM3Model (#1)
Loads the SAM 3 weights for image tasks. If you swap weights, keep your image lanes consistent so previews and saves reflect the same SAM 3 backbone.
SAM3Segmentation (#82)
Text-driven image segmentation. Provide a clear text prompt describing the target class. If multiple objects are detected, make the description more specific or run multiple passes to collect separate SAM 3 masks.
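When one prompt cannot cleanly name the target, running several narrower prompts and combining the resulting masks is often simpler than fighting a single description. A small NumPy sketch of the union step (the per-pass masks here are stand-ins for SAM 3 outputs):

```python
import numpy as np

def union_masks(masks: list[np.ndarray]) -> np.ndarray:
    """Combine binary masks from separate text-prompt passes into one
    selection; keep the per-pass masks too if comp needs them separately."""
    return np.logical_or.reduce(masks).astype(np.uint8)
```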
SAM3Segmentation (#81)
Box-driven image segmentation. Draw one or more tight boxes around the object. Use additional boxes to exclude adjacent regions if the mask bleeds, then re-run to refine the SAM 3 output.
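The effect of a negative box is easy to picture: the final selection is the positive mask with the excluded regions zeroed out. A toy NumPy sketch of that subtraction (illustrative only; SAM 3 refines boundaries from the boxes rather than doing a hard rectangular crop like this):

```python
import numpy as np

def exclude_box(mask: np.ndarray, box: tuple[int, int, int, int]) -> np.ndarray:
    """Zero out an (x0, y0, x1, y1) region of a binary mask — roughly the
    effect a negative box has on the final selection."""
    x0, y0, x1, y1 = box
    out = mask.copy()
    out[y0:y1, x0:x1] = 0
    return out
```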
SAM3VideoModelLoader (#69)
Initializes the SAM 3 video model for the clip lane. Keep this consistent with your image model choice if you plan to match looks across stills and footage.
SAM3VideoSegmentation (#78)
Sets the initial selection on the first frame using text, points, or boxes. Start with the simplest cue that cleanly isolates the subject. If the first-frame mask is perfect, propagation will be easier and faster across the rest of the video.
SAM3Propagate (#77)
Propagates the initial mask through the sequence. Adjust its behavior when subjects move quickly, change scale, or partially occlude. If drift appears after a scene change or cut, re-initialize near the cut and propagate again to keep SAM 3 results stable.
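If you want to locate the cuts before re-initializing, a cheap heuristic is mean absolute difference between consecutive frames. This sketch (threshold value and array conventions are ours, not part of the workflow) flags candidate frames where setting a fresh first-frame selection is worth the effort:

```python
import numpy as np

def find_cuts(frames: np.ndarray, threshold: float = 40.0) -> list[int]:
    """Flag likely scene cuts in an (N, H, W, C) uint8 clip by mean absolute
    difference between consecutive frames; re-initialize the mask at each
    returned frame index before propagating again."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2, 3))
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]
```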
SAM3VideoOutput (#76)
Packages the propagated SAM 3 masks and a visualization overlay. Use the overlay MP4 to review quality frame by frame, and use the mask-only MP4 for direct ingest in comp or editorial.
SAM3BBoxCollector (#84)
Interactive box tool for image selection. Draw tight positive boxes and optional negative boxes to guide SAM 3 toward precise boundaries, then preview and iterate.
SAM3PointCollector (#79)
Interactive point tool for video initialization. Add a few well-placed positive and negative clicks on the first frame to steer SAM 3 when text or boxes alone are ambiguous.
…
Notes
SAM 3 in ComfyUI Workflow | Precision Image Segmentation AI — see RunComfy page for the latest node requirements.
Description
Initial release — SAM-3.