Trained on images by artists whose artwork has an ink-brush or heavy-lineart style. The example images have had very minimal editing/cleanup.
Reproducibility
If you want to reproduce the images below, I've prepared a more detailed version of the prompt settings and seed values in this Github Gist: https://gist.github.com/kudou-reira/eadf52cef156eb566cff886221823748
Stable Diffusion Workflow Videos
ControlNet 1.1 for Character Design | Part 2
Creating a PNGtuber with Character LoRA
Training LyCORIS (LoCon/LoHa) for Style | LoRA Part 2
ControlNet for Character Design | Part 1
Training LoRA for Style | LoRA Part 1
Creating Stylized Vector Logo Characters
How I train DreamBooth for Style:
Model Demonstration:
Example settings: Steps: 50, Sampler: Euler, CFG scale: 7, Size: 512x512
Example prompt: DBlinebrush style, masterpiece, 1girl, beautiful portrait of an anime female adventurer, monochrome
Description
Trained on a larger proportion of images of humans. The style is more manga-like, with an emphasis on detailing through linework. It works well as an inpainting model (after merging with the SD 1.5 inpainting model) for bringing dreamy or blurry forms into focus.
Updated description (1/12/23):
Sorry for the confusion; perhaps I wasn't clear enough in the description. The linebrush v2-1 model is used as one part of the merge that creates an SD 1.5 inpainting version of the linebrush model. I then used that new inpainting version of linebrush v2-1 to turn the dreamier or blurrier outputs from my other model (https://civitai.com/models/3164/fantasy-style) into the preview images. I used a low inpainting conditioning mask strength (around 0 - 0.1) to preserve the details while keeping the denoising strength relatively high (0.75 - 1).
The base images I used for img2img with the inpainting linebrush model are here: https://imgur.com/gallery/C0EChZz. You can create an inpainting model from any 1.5 finetuned model following the resource here: https://www.reddit.com/r/StableDiffusion/comments/zcby0o/you_can_now_merge_inpainting_and_regular_models/. I also have a video about it here starting at timestamp 01:43: https://youtu.be/aNewHiib4Nk?t=103.
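The merge step described above corresponds to the "Add difference" mode in a checkpoint merger: inpainting variant = sd-1.5-inpainting + (finetuned model - sd-1.5 base), at multiplier 1.0. A minimal sketch of that key-by-key merge, using plain floats as toy stand-ins for the real state-dict tensors (the function name `add_difference` is mine, not from any tool):

```python
def add_difference(a, b, c, multiplier=1.0):
    """Add-difference merge: a + multiplier * (b - c), key by key.

    a = inpainting base (e.g. sd-1.5-inpainting)
    b = finetuned model (e.g. linebrush v2-1)
    c = the base the finetune started from (e.g. sd-1.5)

    Only keys present in all three state dicts are merged. Real merges
    apply this element-wise to each weight tensor; floats stand in here.
    """
    return {k: a[k] + multiplier * (b[k] - c[k]) for k in a if k in b and k in c}

# Toy example: the finetune learned a delta of +0.5 over its base,
# and that delta is grafted onto the inpainting model's weight.
inpaint_base = {"unet.w": 1.0}
finetune = {"unet.w": 1.5}
sd15_base = {"unet.w": 1.0}

merged = add_difference(inpaint_base, finetune, sd15_base)
# merged["unet.w"] == 1.0 + (1.5 - 1.0) == 1.5
```

This is why the multiplier is kept at 1.0: anything less would only partially transfer the finetune's learned style onto the inpainting weights.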
Comments (6)
Can you post safetensor?
When I have time, I'll try. I'm currently traveling, so I have a really slow upload speed.
Safetensor version is uploaded. It's in the process of verification. Thank you for your patience!
yo I appreciate it man, I'm dling now
Hi, do you have anything else affecting your environment, such as a Clip Skip setting or a VAE? I like your preview images, but I can't replicate anything close to them. I did download bad-artist.pt and put it into the embeddings folder (is that supposed to be a .pt file? All my other embeddings are .bin files).
Here is what I get: https://i.imgur.com/YOAQZw0.png
DBlinebrush style, masterpiece, 1girl, beautiful portrait of anime female adventurer
Negative prompt: (hands), (out of frame), illustration by bad-artist, 3d, render, doll, plastic, monochrome, (saturated:1.5), (high contrast), (hair covering face:1.5), (cross-eyed: 1.5), (obscured face:1.5), (obscured neck:1.5), (((multiple people))), (multiple heads), asymmetrical eyes, ((((ugly)))), (((duplicate))), (((mutation))), ((morbid)), ((mutilated)), medium breasts, (((large breasts))), extra fingers, mutated hands, (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck)))
Steps: 50, Sampler: Euler, CFG scale: 7, Seed: 3716395057, Size: 512x512, Model hash: 1400e684
Used embeddings: bad-artist [2a38]
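For anyone puzzled by the stacked parentheses in the negative prompt above: in AUTOMATIC1111's WebUI, each layer of parentheses multiplies a token's attention weight by 1.1, while the explicit `(text:1.5)` form sets the weight directly. A simplified sketch of that weighting rule (a toy illustration, not the WebUI's actual parser; it assumes a single, fully wrapped token):

```python
import re


def emphasis_weight(token):
    """Return (text, weight) for an A1111-style emphasized prompt token.

    Each bare paren layer multiplies attention by 1.1, so ((((ugly))))
    is 1.1**4. An explicit (text:1.5) sets the weight directly (the
    colon form consumes one paren pair). Simplified: assumes the whole
    token is wrapped, with no mixed nesting.
    """
    depth = 0
    while token.startswith("(") and token.endswith(")"):
        token = token[1:-1]
        depth += 1
    m = re.fullmatch(r"(.*?):\s*([0-9.]+)", token)
    if m and depth > 0:
        return m.group(1), round(float(m.group(2)) * 1.1 ** (depth - 1), 4)
    return token, round(1.1 ** depth, 4)
```

So `((((ugly))))` weighs in at about 1.46, `(hands)` at 1.1, and `(cross-eyed: 1.5)` at exactly 1.5.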