Huggingface Repo: https://huggingface.co/lex-hue/Fluffity
Use e621 tags (no underscores). Artist tags are very effective (SD/e621 artists)
Clip skip: Stop at CLIP layers = 1 (SD=1, NAI=2) [Settings\Stable Diffusion\Clip skip]
Example Settings
Txt2img Settings
Steps = 15~35
Sampler = Any
CFG scale = 5~25
Negative embeddings = fluffity-neg ※[SD-WebUI\embeddings]
Hires. fix
Hires steps = 15
Denoising strength = 0.45~0.5
Hires upscaler = 4x-UltraMix_Smooth ※[SD-WebUI\models\ESRGAN]
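The txt2img settings above can be sketched as a request body for the AUTOMATIC1111 WebUI API (POST /sdapi/v1/txt2img). This is a minimal sketch, assuming a recent WebUI launched with --api; the example prompt tags are placeholders, and field names may shift between WebUI versions.

```python
import json

# Sketch of a txt2img payload using the card's recommended settings.
# Field names follow the A1111 WebUI API; prompt content is illustrative.
payload = {
    "prompt": "anthro, wolf, smiling, detailed background",  # example e621-style tags
    "negative_prompt": "fluffity-neg",       # negative embedding, by name
    "steps": 25,                             # 15~35
    "cfg_scale": 7,                          # 5~25
    "sampler_name": "Euler a",               # any sampler works
    "width": 512,
    "height": 512,
    # Hires. fix
    "enable_hr": True,
    "hr_second_pass_steps": 15,
    "denoising_strength": 0.45,              # 0.45~0.5
    "hr_upscaler": "4x-UltraMix_Smooth",
    # SD settings from the card
    "override_settings": {
        "CLIP_stop_at_last_layers": 1,
        "eta_noise_seed_delta": 0,
    },
}
print(json.dumps(payload, indent=2))
```

Send it with `requests.post(f"{base_url}/sdapi/v1/txt2img", json=payload)` against a running WebUI instance.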
Img2img Resize
Scale to = 1.5~2.0
Denoising strength = 0.45~0.5
ControlNet0 = softedge_hed, control_v11p_sd15_softedge
ControlNet0 Weight = 0.4~0.5
ControlNet0 Pixel Perfect = true
ControlNet1 = Reference
ControlNet1 Weight = 0.2~0.3
ControlNet1 Pixel Perfect = true
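The img2img + ControlNet setup above can be sketched as a payload for POST /sdapi/v1/img2img, with the two ControlNet units passed through the ControlNet extension's alwayson_scripts interface. This assumes the sd-webui-controlnet extension is installed; exact key names depend on the extension version, and the init image is a placeholder.

```python
# Sketch of an img2img payload with both ControlNet units from the card.
payload = {
    "init_images": ["<base64-encoded source image>"],  # placeholder, not real data
    "denoising_strength": 0.45,        # 0.45~0.5
    "width": 768,                      # "Scale to = 1.5~2.0" of the source size
    "height": 768,
    "alwayson_scripts": {
        "controlnet": {
            "args": [
                {   # ControlNet0: SoftEdge
                    "enabled": True,
                    "module": "softedge_hed",
                    "model": "control_v11p_sd15_softedge",
                    "weight": 0.45,    # 0.4~0.5
                    "pixel_perfect": True,
                },
                {   # ControlNet1: Reference (preprocessor only, no model file)
                    "enabled": True,
                    "module": "reference_only",
                    "weight": 0.25,    # 0.2~0.3
                    "pixel_perfect": True,
                },
            ]
        }
    },
}
units = payload["alwayson_scripts"]["controlnet"]["args"]
print(len(units), "ControlNet units configured")
```

The Reference unit uses a preprocessor-only module, which is why no `model` entry is given for it.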
SD Settings
Stop at CLIP layers = 1
Eta noise seed delta = 0
Txt2img AnimateDiff
Steps = 20~50
Sampler = Any
CFG = 7~25
Image size = 512~768
Number of frames = 8~16
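As a quick sanity check, the AnimateDiff ranges above can be validated before submitting a run. The parameter names here are illustrative, not the AnimateDiff extension's actual schema.

```python
# Minimal sketch: check a planned AnimateDiff run against the card's
# recommended ranges. Parameter names are illustrative only.
def check_animatediff(steps, cfg, size, frames):
    """Return True if all values fall inside the recommended ranges."""
    return (20 <= steps <= 50          # Steps = 20~50
            and 7 <= cfg <= 25         # CFG = 7~25
            and 512 <= size <= 768     # Image size = 512~768
            and 8 <= frames <= 16)     # Number of frames = 8~16

print(check_animatediff(steps=30, cfg=7, size=512, frames=16))  # True
```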
Description
More Training
It has a bigger knowledge base and has started to pick up some artists' art styles.
The art styles I discovered:
inno-sjoa (use "by {artist name}")
jay naylor
vader-san
shirokoma
No? Then what about these:
Dalle (use "dalle style" with 1.3~1.8 emphasis)
midjourney
Well, when I discovered them they didn't need any extra help, so to get great results, use one artist/style at first and then combine multiple. The Dalle/Midjourney styles, though, I did train it for on purpose.. (sorry, but we want free stuff, lol)
And so much more..
;-; I just need to write some documentation on how to use it, right?