This is a TRIAL version of an SDXL training model; I really don't have much time for it. You can download it and do a finetune.
Description
This is the first version; sometimes you need the "anime" tag to trigger it. I usually don't use embeddings.
FAQ
Comments (7)
Hi, standard SDXL uses the Refiner during img2img. What should be used for img2img with this model?
I haven't worked on img2img yet. All of these pictures are plain txt2img at 1024*1535, with no hires fix or img2img. It's a trial version; further work will be done when I'm available.
Generally using the base model works better for me. AKA, regular hires fix.
The refiner shouldn't really be used in img2img (unless you're going to stop generation midway and then send it to img2img). At the least it isn't the optimal approach, and it may produce worse results instead.
The refiner's job is to act as a denoiser, and as @munchkin said, it expects a noisy image. It should really be passed from one sampler to another as a latent. A good workflow for ComfyUI is https://github.com/SytanSD/Sytan-SDXL-ComfyUI. To answer your specific question: as far as I'm aware, you don't use LoRAs with the refiner, and you don't need special models for it either. Only the base model seems to need the modifications.
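To illustrate the handoff described above (this is a generic sketch, not this model author's workflow; the function and parameter names, such as `high_noise_frac`, are illustrative): in the base+refiner setup the base model denoises the early, high-noise steps, then hands its still-noisy latent to the refiner, which finishes the remaining low-noise steps. A minimal way to picture how one step budget is split between the two models:

```python
def split_denoising_steps(num_inference_steps: int, high_noise_frac: float):
    """Illustrative helper: split a single sampling run between the SDXL
    base model and the refiner. The base handles the early, high-noise
    steps; the refiner continues from the base's latent output through
    the remaining low-noise steps. The refiner therefore always receives
    a noisy latent, never a finished image."""
    cutoff = round(num_inference_steps * high_noise_frac)
    base_steps = list(range(cutoff))
    refiner_steps = list(range(cutoff, num_inference_steps))
    return base_steps, refiner_steps

# With 40 steps and a 0.8 split, the base runs the first 32 steps and
# the refiner the last 8, picking up exactly where the base stopped.
base, refiner = split_denoising_steps(40, 0.8)
```

This mirrors why sending a *finished* image back through the refiner in img2img works poorly: the refiner's portion of the schedule assumes it is continuing a partially denoised latent, not restarting from a clean image.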
One additional explanation, building on @munchkin's and @kilos's comments.
On performance: according to the SDXL paper, the user-preference rate is 26.2% for base+refiner versus 22.7% for base only. Compared with the 4.63% it reports for SD 1.5, that gap isn't a huge deal.
But it brings extra requirements (read: trouble): you need a finetuned base and a finetuned refiner, and you have to change your workflow. Even then, the 22.7% -> 26.2% improvement isn't guaranteed.
I'm sorry to say I had very bad results with this. The model always ends up doing whatever it wants and ignoring the prompts, in every single UI I tried.