The Pro version of Babes_By_Stable_Yogi Pony is available on my Patreon.
Onsite generation remains permanently available on this version:
Babes_By_Stable_Yogi: https://civarchive.com/models/174687?modelVersionId=2110984
## Babes By Stable Yogi V6.5
V6.5 is a substantial step up from V6.0, with improvements to:

- Anatomy
- Expressions and gestures
- Environments
- Pose stability

Many of these fixes came directly from our Patreon community's feedback: folks running the in-development checkpoints, noting what broke, and helping us prioritize what to fix for the next version.
## Be part of the next version
We run a free image generator at discord.gg/DZEenb5wGc that serves the next in-development checkpoint. Explorers generate with it, report shortfalls, and we feed that feedback into the build queue for the following version.
If you've ever wanted to be part of how a checkpoint is actually made, hop in. It's free to use, with no install and no setup. Generate, find issues, tell us, and you'll see your fixes land in the next release.
## File lineup
Six format variants ship for general use:
| Variant | Notes |
|---|---|
| FP16 | Default for most users: best quality/speed balance |
| BF16 | RTX 3000+ with BF16 support; slight speed edge |
| FP8 Scaled | ~8 GB VRAM cards; Forge & ComfyUI native, no extra nodes |
| NF4 | Lowest VRAM (4-6 GB); Forge native, ComfyUI with the bnb_nf4 node |
| Q6_K | Highest-quality GGUF; needs ComfyUI-GGUF or [my Forge GGUF extension](https://github.com/brandulateai/sd-forge-sdxl-gguf-brandulateai) |
| DMD2 | 4 steps, CFG 1.2, LCM sampler; fastest option |
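For scripted workflows, the lineup above can be kept as plain data and queried by available VRAM. A minimal sketch; the VRAM thresholds are my own reading of the table, not official requirements, and the helper name is illustrative:

```python
# Illustrative only: quality-tier variants from the table above, ordered by
# a rough VRAM floor (my assumption). DMD2 is a speed option, not a tier,
# so it is left out of this lookup.
VARIANTS = [
    ("NF4",        4,  "Lowest VRAM (4-6 GB); Forge native"),
    ("FP8 Scaled", 8,  "~8 GB cards; Forge & ComfyUI native"),
    ("Q6_K",       10, "Highest-quality GGUF; needs a GGUF loader"),
    ("FP16",       12, "Default: best quality/speed balance"),
]

def suggest_variant(vram_gb: float) -> str:
    """Return the highest-quality variant whose assumed VRAM floor fits."""
    best = VARIANTS[0][0]
    for name, floor, _note in VARIANTS:
        if vram_gb >= floor:
            best = name
    return best

print(suggest_variant(6))   # NF4
print(suggest_variant(8))   # FP8 Scaled
print(suggest_variant(16))  # FP16
```

On RTX 3000+ cards you might prefer BF16 over FP16 at the same tier, per the table's note.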
## Recommended settings
Standard variants:
- Sampler: Euler a or DPM++ 2M
- Scheduler: Karras
- Steps: 25-30
- CFG: 5-7
DMD2 variant only:
- Sampler: LCM
- Steps: 4 (yes, four)
- CFG: 1.2
Resolution: native SDXL. 1024×1024, 832×1216, and 1216×832 all work cleanly.
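If you drive generations through an API or script, the settings above fit neatly into a small lookup. A hedged sketch; the keys and helper name are my own choices, not from any particular UI:

```python
# Illustrative: the recommended settings above as plain data. Values are
# picked from the middle of the recommended ranges.
SETTINGS = {
    "standard": {
        "sampler": "Euler a",   # or "DPM++ 2M"
        "scheduler": "Karras",
        "steps": 28,            # anywhere in 25-30
        "cfg": 6.0,             # anywhere in 5-7
    },
    "dmd2": {
        "sampler": "LCM",
        "steps": 4,
        "cfg": 1.2,
    },
}

SDXL_NATIVE_SIZES = [(1024, 1024), (832, 1216), (1216, 832)]

def settings_for(filename: str) -> dict:
    """Pick the profile from a checkpoint filename: DMD2 builds use LCM."""
    return SETTINGS["dmd2" if "dmd2" in filename.lower() else "standard"]

print(settings_for("Babes_V6.5_DMD2.safetensors")["steps"])  # 4
print(settings_for("Babes_V6.5_FP16.safetensors")["cfg"])    # 6.0
```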
## Where else to find me
Pro variants and feedback: [Patreon](https://patreon.com/brandulate)
GitHub: https://github.com/brandulateai
Free generator with the next checkpoint waiting for your feedback: [Discord community](https://discord.gg/DZEenb5wGc)
Join me on Patreon for exclusive perks and early access to unique resources.
To discuss custom LoRAs or models, feel free to connect on Discord.
Like this model to keep me motivated and inspired to create more!
Drop a comment and let me know what you'd love to see next.
Review this model to help me improve and make even better creations.
Hit that notification bell to stay updated with my latest models and updates!
## Important usage tips
Add Stable_Yogis_PDXL_Positives at the beginning of your prompt section.
Add Stable_Yogis_PDXL_Negatives-neg at the beginning of your negative prompt section.
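When building prompts programmatically, the tip above is easy to apply automatically. A small sketch; the trigger names come from the tips above, while the helper itself is my own:

```python
# Illustrative: prepend the recommended embedding triggers to a prompt pair.
POS_EMBED = "Stable_Yogis_PDXL_Positives"
NEG_EMBED = "Stable_Yogis_PDXL_Negatives-neg"

def build_prompts(prompt: str, negative: str = "") -> tuple[str, str]:
    """Return (positive, negative) with the embeddings placed first."""
    pos = f"{POS_EMBED}, {prompt}".strip(", ")
    neg = f"{NEG_EMBED}, {negative}".strip(", ")
    return pos, neg

pos, neg = build_prompts("1girl, beach, sunset", "blurry")
print(pos)  # Stable_Yogis_PDXL_Positives, 1girl, beach, sunset
print(neg)  # Stable_Yogis_PDXL_Negatives-neg, blurry
```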
## About this version β Q4_0 & Q8_0 GGUFs
This page hosts the two GGUF variants of Babes By Stable Yogi Pony V6.5: a small (Q4_0) and a near-lossless (Q8_0) build, both pre-bundled with CLIP-L, CLIP-G, and the VAE so they load like a normal checkpoint.
### Which one should I download?
| File | Size | Quality vs FP16 | Best for |
|---|---|---|---|
| Q4_0 | ~2.7 GB | ~95%; slight softness in fine details | 6-8 GB cards (RTX 3050, 2060, 1660, 4060); makes SDXL run comfortably on budget hardware |
| Q8_0 | ~3.9 GB | ~99%; visually identical to FP16 | 12-16 GB cards (RTX 3060 12GB, 4070, 4080); full quality with room left for LoRAs / ControlNet / hires fix |
Rule of thumb:
- VRAM tight? → Q4_0
- VRAM fine, just want a smaller file? → Q8_0
- 24 GB+ card with no constraints? → grab the FP16/BF16 instead (separate version on the model page)
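The rule of thumb above can be sketched as a tiny picker. The VRAM cutoffs are my reading of the table (6-8 GB leans Q4_0, more headroom leans Q8_0), not hard limits:

```python
# Illustrative encoding of the rule of thumb above; cutoffs are assumptions.
def pick_gguf(vram_gb: float, unconstrained: bool = False) -> str:
    """Suggest a download from this page based on available VRAM."""
    if unconstrained and vram_gb >= 24:
        return "FP16/BF16 (separate version on the model page)"
    if vram_gb <= 8:          # VRAM tight
        return "Q4_0"
    return "Q8_0"             # VRAM fine, just want a smaller file

print(pick_gguf(6))                       # Q4_0
print(pick_gguf(12))                      # Q8_0
print(pick_gguf(24, unconstrained=True))  # FP16/BF16 (separate version on the model page)
```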
### How to use these files
Both are GGUF format, which needs a loader:
- Forge / Forge Neo: install [my SDXL GGUF extension](https://github.com/brandulateai/sd-forge-sdxl-gguf-brandulateai), restart, and the GGUFs appear in your standard checkpoint dropdown like any safetensors file.
- ComfyUI β install the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node and use its GGUF loader.
### Recommended settings (both files)
- Sampler: Euler a or DPM++ 2M
- Scheduler: Karras
- Steps: 25-30
- CFG: 5-7
- Resolution: 1024×1024 (or any SDXL-native size: 832×1216, 1216×832)
### Why GGUF at all?
GGUF compresses the model so it fits in less VRAM without dramatic quality loss. If you've ever tried to run SDXL on a 6-8 GB card and it crawled or crashed, that's the problem these files solve. Q4_0 makes SDXL accessible on hardware that "shouldn't" handle it; Q8_0 lets you free up VRAM for everything else in your workflow.
### Support our work
If V6.5 helped you, support the project:
- Patreon: https://patreon.com/brandulate
- Discord (Brandulate Server): https://discord.gg/DZEenb5wGc

