CivArchive
    test_V1 SDXL - v2
    NSFW

    Description

    This is TestV1 with a CLIP fix as the base, merged with my project model at a 0.35 ratio.
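A 0.35 merge here presumably means a weighted average of the two checkpoints' weights. A minimal sketch of that idea, assuming simple per-key linear interpolation (plain floats stand in for tensors; `weighted_merge` is a hypothetical name, not a tool referenced on this page):

```python
def weighted_merge(base, other, alpha=0.35):
    """Linear interpolation: result = (1 - alpha) * base + alpha * other.

    Keys present in only one model are copied through unchanged.
    In a real SDXL merge, the values would be torch tensors loaded
    from .safetensors files rather than plain floats.
    """
    merged = {}
    for key in base.keys() | other.keys():
        if key in base and key in other:
            merged[key] = (1 - alpha) * base[key] + alpha * other[key]
        else:
            # Unmatched key: keep whichever model has it
            merged[key] = base.get(key, other.get(key))
    return merged
```

At alpha = 0.35 the base model keeps 65% influence per weight, which matches the "TestV1 as a base" framing above.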

    FAQ

    Comments (16)

    AFD_0 · Feb 24, 2026 · 1 reaction
    CivitAI

    @jrrytng467 Nice job with v2! Still seems to retain the image quality and realism and "look" of v1 (for the most part) while improving prompt/LoRA adherence. Oddly, it really improved the likeness of a character LoRA that was completely sub-par in v1, but also slightly decreased the likeness and image quality of a different LoRA. Strange, but overall, that makes it more versatile imo (a big improvement is worth a slight regression elsewhere). Definitely keeping v1, but think I'll be keeping this one as well!

    Finally realized today that the faceSmash model you mentioned starting with was Reality BoundXL faceSmash v8.0-reuploaded. I've had that model pinned in my browser for several months (since v1), but never got around to trying it. If it's anything like test_v1 (or v2), then I'm sure I'll enjoy it. Kinda tempted to try soulGrip v10 first, though. Still gotta try your ChasingMyVision model too!

    jrrytng467
    Author
    Feb 24, 2026· 1 reaction

    Thank you for the input. I was really pleased with the test model, and with this new version I was trying to see if I could repair the CLIP issue so it could be used in further projects. I just learned of this tool, and I've been using it now when merging, thinking it might help with a clean merge (what do I know, haha).

    I really enjoy the work the author of "Reality BoundXL faceSmash v8.0-reuploaded" does. I think we kind of have a similar vision of where we want to go with this, and he is the one that helped point me in this direction.

    My newest model, ChasingMyVision, has his model "Kissing" all over it. I first had an issue with it because it was a BF16, and when I was merging with it the files were getting huge; plus, little by little, I was starting to have issues on either the front end or back end of my model.

    I also went in with the concern that the CLIP issue might have carried over from the "faceSmash" model. So many things went into this process, but in the end, with the tool I used I was able to convert it back to FP16, prune it down, and remove any junk files.
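The "prune it down and remove any junk files" step usually means dropping training leftovers (EMA/optimizer keys) and storing the remaining weights at FP16. A rough sketch of that logic, assuming a key-to-dtype mapping stands in for the real state dict (the `JUNK_PREFIXES` list is illustrative, not taken from any specific tool):

```python
# Assumed examples of training leftovers that inference doesn't need
JUNK_PREFIXES = ("model_ema.", "optimizer.", "loss_scale")

def prune_and_downcast(state):
    """Drop junk keys and mark remaining bf16/fp32 weights as fp16.

    `state` maps tensor names to dtype strings; a real implementation
    would operate on a safetensors/torch state dict instead.
    """
    cleaned = {}
    for key, dtype in state.items():
        if key.startswith(JUNK_PREFIXES):
            continue  # training leftovers: skip entirely
        # Storing bf16/fp32 weights as fp16 roughly halves the file size
        cleaned[key] = "fp16" if dtype in ("bf16", "fp32") else dtype
    return cleaned
```

This is why the merged file stopped "getting huge": halving the precision of a BF16/FP32 checkpoint and stripping EMA copies brings an SDXL file back toward the usual ~6GB range mentioned below.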

    His models have a strong way of reading prompts, which is why I use them. In his new version, "Soul", I see a lot of similarities, and I asked him how it came about. I think it's because we think a lot alike.

    Anyways, thank you for testing my models, and for your input; I do value it, and it helps me going forward. I do recommend trying his work, I think he deserves a lot of attention.

    AFD_0 · Feb 24, 2026 · 1 reaction

    @jrrytng467 Hopefully I don't have any issues with ChasingMyVision being an odd size. I can get standard ~6GB FP16/BF16 and ~12GB FP32 models to work just fine, but anything further compressed seems to have issues loading for me (mostly ~4GB models, and I think maybe one or two ~8GB models). I'll give it a try after I'm done playing with test_v2, which really is quite good and very enjoyable to use. Do appreciate you continuing to share your works!

    jrrytng467
    Author
    Feb 24, 2026· 1 reaction

    @AFD_0 Please let me know, I haven't thought about that part of it.

    AFD_0 · Feb 24, 2026 · 1 reaction

    @jrrytng467 I'll let you know! And even if it is an issue, it's kinda more of a "me" problem for using older software. As long as it works in CivitAI, ComfyUI and Forged, then it's probably fine for 99% of people. But other than the ~4GB compressed models, I've only seen it a few times with any FP16/BF16 and FP32 SDXL-based models (Pony, Illu, etc). And dang, did I mention that test_v2 is really good? XD

    jrrytng467
    Author
    Feb 24, 2026· 1 reaction

    @AFD_0 When you can see it, look at the images I just posted on my new model of Golden Retrievers. Blows me away. I love doing this.

    AFD_0 · Feb 24, 2026

    @jrrytng467 That's crazy good! Just curious, but what model did you use to make the Toyota MR2? Outside of using a LoRA for that specific car, the likeness is quite remarkable without needing much prompting.

    jrrytng467
    Author
    Feb 24, 2026· 1 reaction

    @AFD_0 I've actually used both models for testing. I don't use LoRAs unless I want to bring something out in a character; I have merged LoRAs in, but that's it. I don't know if you know MR2s, but it's kinda using the front of the MK2 and the back of the MK1, which I didn't specify.

    jrrytng467
    Author
    Feb 24, 2026· 1 reaction

    @AFD_0 No LoRAs, both models. I have a '93 MR2, so I just add "wide body" to it.

    AFD_0 · Feb 24, 2026

    @jrrytng467 I'm about 90% familiar with the MR2. Drove a black Evo X for over a decade, so I also have some appreciation of beautiful Japanese sports cars. When I first saw your MR2 it somewhat reminded me of a Celica in the front and maybe a bit of Supra, but I think that's just an overall theme of Toyota's styling at the time. Anyway, it's an impressive likeness, especially for baked-in model knowledge.

    jrrytng467
    Author
    Feb 25, 2026

    @AFD_0 Earlier I said something about not using LoRAs; you might see them in my prompts, but they aren't loaded. You're just seeing a prompt I have copied and pasted. I do that a lot just to compare, which also shows how much my model follows the prompts. Plus it's lazy on my part.

    AFD_0 · Feb 25, 2026

    @jrrytng467 That's useful info, thanks! Think it's always good practice for a creator to test their model with some LoRAs for adherence (i.e., pick some character LoRA and see if it actually picks up the character's likeness, or if it instead starts pulling other unwanted elements like the background, lighting, source quality or style). There are so many models, even incredibly popular ones, that simply can't replicate a character from a LoRA at all, or do it very badly, which is an important function imo (and one that your models seem to handle much better than most, which I truly appreciate).

    But yes, for a model's gallery samples, those should be generated bare, without any added influence from LoRAs or embeddings (other than maybe DMD2 acceleration if the model was intended to be used with that). Leaving a reference to such in the prompt isn't a big deal imo, as long as it wasn't being used to make the model seem better or more capable than it is on its own. I'm going to try testing ChasingMyVision tonight if I get a chance. So many good models and so little time! And thank you for the tip you gave the other day, do appreciate that, but being able to play around with test_v1 and v2 is more than enough reward, since they are among the very few models that are very close to what I've been looking for - a high level of realism without imposing an unnecessary style, high level of image quality, excellent prompt/LoRA adherence at normal 4-6 CFG and 0.5-1.0 strengths, highly realistic backgrounds and skin/fabric textures, and the ability to still produce great results with very long/detailed prompts by cranking up the steps to match the length/verbosity.

    Think the only issue I've had with test_v2 versus v1 is that it often has a tendency to create nude/NSFW results when unintended (not always, but quite often). Using "nudity" or "nipples" or "bare breasts" in the negative prompt should fix this, but for some reason it usually doesn't (and using SFW or rating_safe or fully-clothed as a positive prompt doesn't really help either). Honestly not sure exactly what's causing this, as I removed any/all "suggestive" prompts and I still get topless results. Think it's maybe pulling the concepts from the very small amount of nudity in a LoRA I made (it should be >90% fully clothed), but I'm not sure if that's it either. Not really a bad problem to have, I guess, but I just haven't figured out exactly how to work around it yet. So yeah, not really a complaint or anything, just an observation.

    jrrytng467
    Author
    Feb 25, 2026 · 1 reaction

    @AFD_0 That NSFW lean comes from the creator of my base model. It's a very strong model. I have a feeling you are going to love my latest model. I find using DPM++ 2M SDE (?) and a higher resolution creates a much better image. Anyway, let me know what you think; also, much of what you say uses terms I'm clueless about. I haven't been doing this long, so there is still so much I don't understand. I do truly appreciate you. Thank you.

    AFD_0 · Feb 25, 2026

    @jrrytng467 I used to only use DPM++ 2M SDE, but now I mostly just use Euler Ancestral instead. DPM performs double steps, so it's much faster, but it often has a "crunchy" image quality. Euler A provides similar results (at twice the steps) but is usually much smoother. Give it a try sometime! If you're getting good results with, say, 5 CFG and 40 steps with DPM, then keep the same CFG and use 80 steps with Euler A, and it should look a bit more natural.

    Sometimes it's worth the extra time, sometimes it's not. If not, then SDE has the best image quality out of the DPM schedulers imo. With my very old machine, it usually takes me 6 minutes or more just to make one image!
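The rule of thumb above (keep the CFG, double the steps when switching from DPM++ 2M SDE to Euler Ancestral) can be written as a tiny helper. The sampler names and the 2x factor come straight from the comment, not from any sampler API; this is just the arithmetic made explicit:

```python
def to_euler_a(settings):
    """Translate DPM++ 2M SDE settings to the commenter's Euler A equivalent.

    Assumes `settings` has "cfg" and "steps" keys; nothing else is consulted.
    """
    return {
        "sampler": "Euler a",
        "cfg": settings["cfg"],          # CFG scale stays the same
        "steps": settings["steps"] * 2,  # double the step count
    }
```

So the example in the comment, {"cfg": 5, "steps": 40} with DPM++ 2M SDE, maps to 5 CFG at 80 steps with Euler A.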

    jrrytng467
    Author
    Feb 26, 2026

    @AFD_0 I rebuilt my system because it would take a long time. I had a 2060 Super, which is still a good card, but on my rebuild I bought a new card, an NVIDIA RTX 4060 Ti 16GB; the 2060 was 4GB, so just the card alone made such a big difference.

    I have a question about something brought to my attention. A person started out with like a gotcha moment, saying he knows I'm copying and pasting prompts. I was like, yeah, about 90% of the time. He seemed more offended that I used his prompt without giving him credit. That was true; I comment on images or a model, but not on prompts. I didn't know that would bother people. I guess I can kinda understand, so going forward I will try and start doing that. But is that a big thing with most people? And he said something about a bounty being what brought it to his attention? What's that about? I guess I have a lot to learn. If you notice I do something that might not seem right, please point it out to me. I'm not here to cause trouble. Everyone works hard on their models.

    AFD_0 · Feb 26, 2026 · 1 reaction

    @jrrytng467  Yeah, people can get kinda weird about their prompts sometimes. I totally understand that someone would want credit/recognition for someone else using their prompt, but unless you're really good with keeping notes and/or neatly labeling images, it seems like a difficult thing to do. Personally, I do not like sharing my prompts unless it's for a video or something really basic/simple, but CivitAI allows the option of hiding them, so I figure that if I make something intentionally public, I'm giving up all rights to that prompt and anyone's free to use or modify it however they wish, with or without giving me credit. Imo, anyone should be able to just look at my images and easily come up with something kind of similar on their own, but studying different prompting techniques is definitely useful for coming up with different/better ways of doing things.

    I've never requested a bounty, but I think it involves asking everyone for a particular thing (concept, image, model, etc) and paying whoever best fulfills that request. I'll often suggest ideas to different LoRA creators about things that full checkpoints have trouble doing that I haven't seen done with great success, but otherwise, I'd try to just make it myself. Honestly don't know all the etiquette and everything else myself, but I'll let you know if something doesn't seem right. Most people around here seem really nice and aren't trying to cause any trouble. About the only thing I'd suggest is maybe giving credit for any other models used somewhere in the description of your model page. Other than just being polite, it's actually kind of helpful for people to better understand how a model might operate. I've done a ton of very simple merges myself, and after a certain point there are so many different things mixed in that it's really difficult to keep track of everything, even though I type up notes and formulas for stuff like that.

    And my current build is circa 2013, which I upgraded from a 970 to an 11GB RTX 2080 Ti I bought for $400 a few years ago. Bought a new laptop last year, but have been wanting to do a completely new PC build for this year. Really not sure if that's going to happen, since the $300 RAM kit I want is now $1,400. Plus, I'd really like to get a new GPU with at least 24GB or more of VRAM (for training and video), so that's either a $2,000 5090 FE (when available) or a $1,400 Pro 4000 Blackwell. Add in some 4TB SSDs and that price becomes a hard pill to swallow for something I'm doing purely for fun/hobby/learning.

    Checkpoint
    SDXL 1.0

    Details

    Downloads
    290
    Platform
    CivitAI
    Platform Status
    Available
    Created
    2/22/2026
    Updated
    5/1/2026
    Deleted
    -

    Files

    testV1SDXL_v2.safetensors

    Mirrors

    CivitAI (1 mirror)