    🎟️ FUSIONCORE | qp - SD15 v0.5 [LEGACY]
    NSFW

    This model is no longer being updated. Please see my QuadPipe SDXL model, which picks up where this one left off. I will leave this model available on Civitai for those who still enjoy using SD1.5 - but please don't expect future SD1.5 updates.

    Thanks!
    -qp

    --EOL NOTICE--

    This is definitely pre-beta! ha!
    Oh, one note when you look at the generated images here: MarvinAI-SynFlow was the development name I had; I changed it to the FusionCore name before uploading. I'd update the photos here to reflect the new name since the model is identical, but I can't figure out how to edit the photos on the model editing page (I know, right? ... engineer can't figure out engineering. sigh...). Anyway, this is my first publicly shared model upload, so I'm learning.

    Donations: https://www.buymeacoffee.com/quadpipe

    My background is in product design and engineering. I used to work at MIT and Dreamworks - I've been working with AI for a couple of years, but this year is my first deep dive into model creation. That doesn't make me any better at this than anyone else, but it may help to understand why I'm approaching my process the way I am. Anyway, I love the work you guys are putting together and thought I could add some value, too.

    I constantly notice that regardless of how well rendered many photographic images are, they tend to miss something "human". There's a certain essence to being human that's hard to capture by just throwing great photos at a training regime. So, I focused my training on photos that captured emotions AND photographic principles, hoping the generations would find an easier path to "getting it" when a scene was described in a prompt. Right now, the prompts are a little heavy, and while the model does pretty well with light negative prompting, using things others have tested is a great way to validate what we're doing. So, the prompts in the sample images are heavier than they need to be - have fun experimenting and let me know what you learn.

    My images used A LOT of steps, but you don't need that many. That was intentional on my part: I wanted to see how far I had to go to get a terrible burn-in. I had pretty good success in the 30-50 step range, and many of the photos looked great at 512x768 without upscaling (the samples here are mostly upscaled 2x). I used DDIM often but had solid success with Euler and others. It is important to note that the only reason epochs and steps aren't listed is how many models were created to be merged together here. Also, there's some secret sauce in that.
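    As a starting point, settings along these lines match the ranges described above. The prompt text and exact values are illustrative placeholders, not the author's actual parameters:

    ```
    Prompt: candid photo of a woman deep in thought by a rainy window, natural light
    Negative prompt: blurry, deformed hands
    Steps: 40, Sampler: DDIM, CFG scale: 7, Size: 512x768, Hires upscale: 2
    ```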

    Once I feel like this is a solid 1.0 candidate, I'll try to take what I've learned to an SDXL model - that might be a little while because I'm not sure if I can just upscale what I need or if I have to generate a ton of images again to build the training data.

    So, how did I approach this?

    I created about 18 models focused on elements of photography that I thought were appropriate. For example, I made a model called "contemplation" that tried to capture photos of people in deep thought. I focused on emotions and the essence of what being human is all about.

    After creating my models, I merged them with the SD1.5 base and tested and tweaked - for months. As I went, I would develop new submodels to fix things based on successful images generated. I was trying to break away from inheriting derivative challenges (still working on that), so I only relied on images generated from other models rather than merging directly. Like everyone else, hands can be a challenge, and I'm still working on improving those.
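    For anyone curious what the merge step amounts to mechanically: a checkpoint merge is, at its core, a weighted average of matching tensors across two models. A minimal sketch with toy NumPy arrays (the keys and values are stand-ins, not real SD1.5 weight names, and real merge tools offer per-block weights beyond this simple blend):

    ```python
    import numpy as np

    # Toy stand-ins for two checkpoints' state dicts; a real SD
    # checkpoint holds thousands of tensors keyed by layer name.
    base = {"unet.w": np.array([1.0, 2.0]), "unet.b": np.array([0.5, 0.5])}
    sub  = {"unet.w": np.array([3.0, 0.0]), "unet.b": np.array([0.5, 1.5])}

    def weighted_merge(a, b, alpha):
        """Linear interpolation of matching tensors: (1-alpha)*a + alpha*b."""
        return {k: (1.0 - alpha) * a[k] + alpha * b[k] for k in a}

    # alpha controls how strongly the sub-model pulls the base.
    merged = weighted_merge(base, sub, alpha=0.3)
    print(merged["unet.w"])  # [1.6 1.4]
    ```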

    If you want to see me do more here, I'll do my best. I really appreciate any donations, and it means a lot to me. Thank you. (I'll put this line at the bottom too, it may help my chances a bit)

    Donations: https://www.buymeacoffee.com/quadpipe

    These models inspired me, and I used images generated from them occasionally to build my training data.

    Analog Diffusion by wavymulder

    AbsoluteReality by Lykon

    CyberRealistic by Cyberdelia

    Analog Madness by CornmeisterNL

    A-Zovya Photoreal by Zovya

    epiCRealism by epinikion

    Juggernaut by KandooAI

    Description

    This is the first pre-beta version 0.5

    FAQ

    Comments (14)

    busses0_epic · Sep 12, 2023
    CivitAI

    Hello, it is only showing a VAE file. Thank you.

    QuadPipe
    Author
    Sep 12, 2023

    Sorry about that; something went wrong with the server during the upload it seems.

    QuadPipe
    Author
    Sep 12, 2023

    This was 100% my mistake. I uploaded it to the online creator but didn't realize I needed to upload it again to the download-version. This is my first model published, so I'll be here making mistakes all evening. :)

    hollaman · Sep 13, 2023
    CivitAI

    What's the difference between v0.5 and v 0.5?

    QuadPipe
    Author
    Sep 13, 2023 · 5 reactions

    The main difference is that I didn’t know what I was doing when I uploaded and accidentally created a copy.

    hollaman · Sep 14, 2023 · 2 reactions

    @QuadPipe An excellent answer.

    Gairm · Sep 15, 2023 · 1 reaction
    CivitAI

    Definitely a model to keep an eye out for, very promising! I'd say it needs a bit more work on hands and especially feet; if you get those two down, this model will be a banger. As it stands, though, it's on the right path in my opinion, and seems flexible! (Sometimes a bit too much, but nothing rerunning another render doesn't fix.)

    Following this one for sure and looking forward to what you can develop it into! :)

    QuadPipe
    Author
    Sep 15, 2023 · 1 reaction

    Thanks! I'll be posting a negative TI in the next few days, which should help with that stuff a bit (we'll see) - but so far, it is doing a good job in my tests.

    Olbanets · Jan 3, 2024
    CivitAI

    You shouldn't train different models with different concepts and merge them. There are two great articles on medium.com about how to train with validation. Look there for "Fine-Tuning Stable Diffusion With Validation" and "The LR Finder Method for Stable Diffusion".

    QuadPipe
    Author
    Feb 4, 2024 · 1 reaction

    I just saw this; thanks for your thoughts. While I've worked in AI for a few years, Stable Diffusion is something I've only been playing with for about a year. I've learned quite a few techniques for getting models where I'd like them to be, and I'm still learning. I've read those articles, and I've taken a lot of outside research and thoughts into consideration with the training I'm doing now on an update to this model. I'm training my own checkpoints, though I do use generated images frequently in my datasets. When I merge my own models, I'm not always happy, and sometimes I step back a lot.

    Typically, I start with a lot of sub-models (LoRAs) for individual concepts that I want to capture, and when I'm happy with them, I'll merge them into my base model using DiffusionBee, which gives me a quick way to test the output. Even though I work in this space, I don't think any of us are experts yet. I know that even in the year or so that I've been training SD models, there have been quite a few approach changes, even in basic areas like the use of TI vs. LoRA vs. combo, etc. We're learning; thank you for the links; I appreciate it!
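    For context on what merging a LoRA into a base checkpoint does to the weights: a LoRA stores a low-rank update B·A that gets scaled and added to the original weight matrix when it is baked in. A toy sketch with NumPy (the shapes, rank, and scale here are illustrative, not taken from any real SD layer):

    ```python
    import numpy as np

    # Hypothetical 4x4 base weight and a rank-2 LoRA pair (B @ A).
    rng = np.random.default_rng(0)
    W0 = rng.standard_normal((4, 4))
    A = rng.standard_normal((2, 4))   # down-projection (rank x in_features)
    B = rng.standard_normal((4, 2))   # up-projection (out_features x rank)
    scale = 0.8                       # LoRA strength at merge time

    # Baking in the LoRA overwrites the base weight with a fixed update,
    # which is Olbanets' point about losing the base model's variety.
    W_merged = W0 + scale * (B @ A)
    ```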

    Olbanets · Feb 4, 2024 · 1 reaction

    @QuadPipe Any time we inject a LoRA into a model, we overwrite the model's variety with limited data. If you have the resources and time to train a model with validation, do it!

    QuadPipe
    Author
    Feb 11, 2024 · 1 reaction

    @Olbanets 100% - I use the LoRA Merges for testing before I do a long training session.

    Nourdal · Mar 30, 2025 · 1 reaction
    CivitAI

    I really like QuadPipe's models for their individual, original look, unlike many other models, and this one is no exception. A great combination of good photorealism with wide flexibility and responsiveness. This model is also quite good at artistic styles. Good job! Thanks!

    QuadPipe
    Author
    Mar 30, 2025 · 1 reaction

    Thank you so much! I'll keep trying to do something a little different! :)

    Checkpoint
    SD 1.5

    Details

    Downloads
    1,437
    Platform
    CivitAI
    Platform Status
    Available
    Created
    9/12/2023
    Updated
    4/30/2026
    Deleted
    -

    Files

    FUSIONCOREQp_sd15V05LEGACY.safetensors