Pro version of Babes_By_Stable_Yogi Pony is available on my Patreon
Onsite generations remain permanently available on this version:
Babes_By_Stable_Yogi: https://civarchive.com/models/174687?modelVersionId=2110984
Babes By Stable Yogi V6.5
V6.5 is a substantial step up from V6.0, with improvements to:
- Anatomy
- Expressions and gestures
- Environments
- Pose stability
Many of these fixes came directly from our Patreon community: folks running the in-development checkpoints, noting what broke, and helping us prioritize what to fix for the next version.
## Be part of the next version
We run a free image generator on our Discord (discord.gg/DZEenb5wGc) that hosts the next in-development checkpoint. Members generate with it, report what falls short, and that feedback goes straight into the to-do list for the following version.
If you've ever wanted a hand in how a checkpoint is actually made, hop in. Free to use, no install, no setup. Generate, find issues, tell us, and you'll see your fixes land in the next release.
## File lineup
Six format variants ship for general use:

| Variant | Best for |
|---|---|
| FP16 | Default for most users: best quality/speed balance |
| BF16 | RTX 3000+ cards with BF16 support; slight speed edge |
| FP8 Scaled | ~8 GB VRAM cards; Forge & ComfyUI native, no extra nodes |
| NF4 | Lowest VRAM (4-6 GB); Forge native, ComfyUI with the bnb_nf4 node |
| Q6_K | Highest-quality GGUF; needs ComfyUI-GGUF or [my Forge GGUF extension](https://github.com/brandulateai/sd-forge-sdxl-gguf-brandulateai) |
| DMD2 | 4 steps, CFG 1.2, LCM sampler; fastest option |
## Recommended settings
Standard variants:
- Sampler: Euler a or DPM++ 2M
- Scheduler: Karras
- Steps: 25-30
- CFG: 5-7
DMD2 variant only:
- Sampler: LCM
- Steps: 4 (yes, four)
- CFG: 1.2
Resolution: native SDXL sizes work cleanly: 1024×1024, 832×1216, 1216×832.
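For anyone wiring these settings into a generation script, the two profiles above can be captured in a small lookup. This is a minimal sketch: the variant keys, dict layout, and helper name are my own, not part of the model release.

```python
# Recommended settings from above, keyed by variant family.
# "standard" covers FP16 / BF16 / FP8 Scaled / NF4 / Q6_K;
# "dmd2" applies only to the DMD2 merge.
SETTINGS = {
    "standard": {"sampler": "DPM++ 2M", "scheduler": "Karras", "steps": 28, "cfg": 6.0},
    "dmd2": {"sampler": "LCM", "scheduler": None, "steps": 4, "cfg": 1.2},
}

# SDXL-native resolutions that work cleanly with this checkpoint.
NATIVE_RESOLUTIONS = [(1024, 1024), (832, 1216), (1216, 832)]

def settings_for(filename: str) -> dict:
    """Pick the settings profile based on the checkpoint filename."""
    return SETTINGS["dmd2" if "dmd2" in filename.lower() else "standard"]
```

For example, a filename containing "DMD2" gets the 4-step LCM profile; everything else gets the standard 25-30 step profile.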
## Where else to find me
Pro variants and feedback: [Patreon](https://patreon.com/brandulate)
GitHub: https://github.com/brandulateai
Free generator running the next in-development checkpoint, waiting for your feedback: https://discord.gg/DZEenb5wGc
Join me on my Patreon for exclusive perks and early access to unique resources.
To discuss custom LoRAs or models, feel free to connect on Discord.
Like this model to keep me motivated and inspired to create more!
Drop a comment and let me know what you'd love to see next.
Review this model to help me improve and make even better creations.
Hit that notification bell to stay updated with my latest models and updates!
## Important Usage Tips
Add Stable_Yogis_PDXL_Positives at the beginning of your prompt section.
Add Stable_Yogis_PDXL_Negatives-neg at the beginning of your negative prompt section.
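To make the ordering concrete, here is a tiny helper that prefixes both prompts with the embedding triggers. The helper itself is just an illustration of mine; only the two trigger strings come from the tips above.

```python
# Embedding trigger tokens from the usage tips above.
POS_EMBED = "Stable_Yogis_PDXL_Positives"
NEG_EMBED = "Stable_Yogis_PDXL_Negatives-neg"

def build_prompts(prompt: str, negative: str = "") -> tuple[str, str]:
    """Place each embedding trigger at the start of its prompt, as recommended."""
    pos = f"{POS_EMBED}, {prompt}" if prompt else POS_EMBED
    neg = f"{NEG_EMBED}, {negative}" if negative else NEG_EMBED
    return pos, neg
```

For example, `build_prompts("1girl, beach, sunset", "blurry, lowres")` returns both prompts with their triggers already in front.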
## About this version: FP8 Scaled
This is the FP8 Scaled build of Babes By Stable Yogi Pony V6.5: a near-FP16-quality variant in a smaller .safetensors file (~3.2 GB), loadable natively in any modern SD UI without extra extensions or custom nodes.
### Who this is for
- 8-12 GB VRAM cards (RTX 3060, 4060 Ti, 4070, 2080 Ti, 3070, 3080): fits comfortably with room for LoRAs and hires fix
- Anyone who wants near-FP16 quality without the GGUF loader dance: drop the file into models/Stable-diffusion/, pick it from the dropdown, done
- Users on stock Forge / ComfyUI / A1111 setups: no extensions, no custom nodes, no setup
### How it compares
| File | Format | Size | Loader needed? |
|---|---|---|---|
| FP16 | safetensors | ~6.5 GB | None (any UI) |
| FP8 Scaled | safetensors | ~3.2 GB | None (any UI) |
| Q8_0 GGUF | gguf | ~3.9 GB | ComfyUI-GGUF / my Forge extension |
| Q4_0 GGUF | gguf | ~2.7 GB | Same as above |
FP8 Scaled vs Q8_0 GGUF: similar quality, but FP8 is slightly smaller and loads natively. Pick FP8 if you want zero install friction; pick Q8_0 if you already have the GGUF loaders set up and prefer a unified GGUF workflow.
### Quality
The "Scaled" in FP8 Scaled means each tensor carries its own scaling factor that preserves dynamic range, so unlike vanilla FP8, this format is essentially indistinguishable from FP16 in normal generation. If you can spot the difference in a blind side-by-side, you have a sharper eye than most.
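The mechanics are easy to see in miniature. Below is an illustrative sketch of per-tensor scaled quantization in plain Python, using 8-bit integers as a stand-in for FP8 (the real format stores 8-bit floats, but the scale plays the same role: it maps each tensor's actual value range onto the format's narrow representable range).

```python
def quantize_scaled(tensor, qmax=127):
    """Per-tensor scaled quantization: store low-precision values plus ONE scale.

    int8 stands in for FP8 here; the point is that the shared scale adapts
    the narrow 8-bit range to each tensor's dynamic range, so small weights
    are not crushed to zero the way they would be without a scale.
    """
    scale = max(abs(x) for x in tensor) / qmax
    quantized = [round(x / scale) for x in tensor]
    return quantized, scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

weights = [0.02, -0.015, 0.001, -0.0004, 0.018]   # typical tiny model weights
q, scale = quantize_scaled(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

Without the scale, values this small would all collapse toward zero on a fixed 8-bit grid; with it, the worst-case round-trip error is half a quantization step, far below the weights' own magnitude.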
### How to use it
1. Drop the file into <your-ui>/models/Stable-diffusion/
2. Refresh the checkpoint dropdown (or restart the UI)
3. Pick it. Generate. Done.
No extensions. No custom nodes. No external CLIP/VAE module pickers: CLIP-L, CLIP-G, and the VAE are all bundled in this single file.
### Recommended settings
- Sampler: Euler a or DPM++ 2M
- Scheduler: Karras
- Steps: 25-30
- CFG: 5-7
- Resolution: 1024×1024 (or any SDXL-native size: 832×1216, 1216×832)
### Why FP8 Scaled?
It's the friction-free quality variant. You get ~99% of FP16's output quality, ~half the file size, ~half the VRAM footprint, and zero extra setup. For mid-range cards that can technically run FP16 but find it tight, FP8 Scaled is the sweet spot.
### Want maximum quality / minimum size?
The full lineup is spread across this model's version pages:
- FP16 / BF16: for 24 GB+ cards
- FP8 Scaled: you are here
- Q8 / Q4 GGUF: for low-VRAM workflows (separate page)
- DMD2 merge: fastest path (4 steps, LCM, CFG 1.2)
### Support our work
If V6.5 helped you, support the project:
- Patreon: https://patreon.com/brandulate
- Discord (Brandulate Server): https://discord.gg/DZEenb5wGc

