You can also run this model on Mage.space and Sinkin.ai! https://www.mage.space/
V5 - https://www.mage.space/play/9200bfc93333da71123999f3550aabaa
V7 - https://www.mage.space/play/0a2a61c6d4a668ef51f552df0231067d
https://sinkin.ai/m/RR7Vrj4
Want to send some support? (Buy a cup at Ko-fi)
FYI VAE IS BAKED IN
Version 3 has https://huggingface.co/madebyollin/sdxl-vae-fp16-fix baked in.
RealCartoon-XL is an attempt to get some nice images from the newer SDXL. It is still a bit soft on some images, but I enjoy mixing and trying to get the checkpoint to do well on anything asked of it. This model level is definitely pushing my computer, so it takes a bit longer to actually get things tested and mixed :)
The mission is the same as the SD1.5 versions: to get a checkpoint that does well on pretty much any prompt while looking good and offering good variety.
I hope you all enjoy it! Please review and share your images. I very much appreciate the support with the downloads and feedback (THANK YOU ALL). Never thought it would get this much attention.
The Process:
The starting checkpoints for merging were a couple of top ones, of course (those checkpoints do/did not have restrictions on merging; some of them are in my recommended resources as well). I also baked in the VAE (sdxl_vae.safetensors).
Currently this checkpoint is at its beginnings, so it may take a bit of time before it starts to really shine. I will continue to update it as time progresses...but I do hope you all enjoy it as it develops.
The Settings:
Still figuring out SDXL, but here is what I have been using:
Width: 1216 (normally would not adjust unless I flipped the height and width)
Height: 832
Sampling Method: "Euler a" and "DPM++ 2M Karras" are favorites.
Sampling steps: 30 - 55 normally (30 being my starting point, but going up to 55 and even 60 a lot of the time)
Hires.fix settings: Upscaler (R-ESRGAN 4x+, 4k-UltraSharp most of the time), Hires Steps (10), Denoising Str (0.34 - 0.45 normally), Upscale (1.5 or 2 does well)
Clip Skip: 2
Some settings I run on the web UI to help generate images without crashing:
set COMMANDLINE_ARGS=--medvram --no-half-vae --opt-sdp-attention
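On Windows these flags typically go in webui-user.bat in the A1111 install folder. A sketch based on the stock template (the flag summaries in the comments are this checkpoint page's usage, not official documentation):

```bat
@echo off
rem --medvram           : offload model parts to system RAM (less VRAM, slower)
rem --no-half-vae       : keep the VAE in fp32 to help avoid black/NaN images with SDXL
rem --opt-sdp-attention : use PyTorch scaled-dot-product attention (alternative to xformers)

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--medvram --no-half-vae --opt-sdp-attention

call webui.bat
```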
__________________________________________________________________________________________________
License & Use
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them, and you are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the modified CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully).
Please read the full license here: Stable Diffusion
Use Restrictions:
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national, federal, state, local or international law or regulation
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way
- To generate or disseminate verifiably false information and/or content with the purpose of harming others
- To generate or disseminate personal identifiable information that can be used to harm an individual
- To defame, disparage or otherwise harass others
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories
- To provide medical advice and medical results interpretation
- To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime commitment (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).
Terms of use:
- You are solely responsible for any legal liability resulting from unethical use of this model(s)
- If you use any of these models for merging, please state what steps you took to do so and clearly indicate where modifications have been made.
Note:
If you see any conflicts or corrections to be made, please let me know.
Description
It has been a while. After some time and some RAM upgrades, I was finally able to get SDXL running more smoothly, and thus was able to work on the XL version of RealCartoon. I think it is starting to get the overall look I am wanting. It responds rather well, and comes out a bit clearer. The interesting part about SDXL checkpoints is that they have a bit more range...so it takes a bit of getting used to. Overall though, I liked the outputs from the common prompts I use on SD1.5. I hope you all enjoy!!
Comments (39)
REALCARTOONXL UPDATE!! Hell yeah.
would you still recommend this for the hires?
"Hires.fix settings: Upscaler (R-ESRGAN 4x+, 4k-UltraSharp most of the time), Hires Steps (10), Denoising Str (0.34 - 0.45 normally), Upscale (1.5 or 2 does well)"
just wondering if you have any new rec to go with the update. Thanks!
I actually have been running them with Hires on. Will update the settings...been messing around. :)
You have this tagged as a 1.5 Base... pretty sure it's actually an XL one as it says though... might want to change that so folks can find it.
Looks great... can't wait to try this over the weekend when I have my big rig running.
...I did it again....thank you for letting me know. :)
Ruh oh, is v5 of RealCartoonXL supposed to load as sd1.5? That's what it's trying to force itself to do. Unless it IS sd1.5!
I thought I selected the correct file type.....I did not :) Thank you for letting me know though :)
@7whitefire7 Ha, thanks! It's one of my fav models, excited to try the new one!
Will there be a pruned version? Or is the pruned version of the model 12 GB?
Thank you for this great model.
Pruned getting uploaded now.
It is also SDXL....I had the wrong file type selected :/
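Base-model mix-ups like this can also be caught locally: a .safetensors file begins with an 8-byte little-endian header length followed by a JSON index of tensor names, and SDXL checkpoints carry a second text encoder (keys under `conditioner.embedders.1.`) that SD 1.5 checkpoints (`cond_stage_model.`) lack. A rough sketch; the key prefixes are a heuristic, not a specification:

```python
import json
import struct

def read_tensor_names(path):
    """List tensor names from a .safetensors header without loading weights."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # 8-byte LE header size
        header = json.loads(f.read(header_len))
    return [k for k in header if k != "__metadata__"]

def guess_base_model(names):
    """Heuristic: SDXL ships a second (OpenCLIP) text encoder; SD 1.5 does not."""
    if any(n.startswith("conditioner.embedders.1.") for n in names):
        return "sdxl"
    if any(n.startswith("cond_stage_model.transformer.") for n in names):
        return "sd15"
    return "unknown"
```

For example, `guess_base_model(read_tensor_names("realcartoonXL_v5.safetensors"))` should report "sdxl" for this checkpoint.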
I'm new here. Since I discovered this model, I can't seem to be interested in other models :)
However, there's a subject I'm curious about. How can I use this model with ControlNet v1.1.410 models? For example: control_v11p_sd15_openpose.
"RuntimeError: mat1 and mat2 shapes cannot be multiplied"
Hmm....I have the same version, and I attempted both an openpose and a depth....neither worked; but I have not done ControlNet with SDXL yet....so I think I am lacking the correct ControlNet tool. Will see if I can figure that out....and if I find out before you, I will let you know :) ....but if you find it, let me know :)
@emrekesler @7whitefire7
You have to use dedicated XL ControlNet models; this one says in its name that it's for SD 1.5 only.
"Matrices cannot be multiplied" basically means a mismatch between the checkpoint's architecture (SD 1.5 is trained at 512, XL at 1024) and the ControlNet model.
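That "cannot be multiplied" failure is plain matrix arithmetic: the conditioning tensors an SD 1.5 ControlNet emits are sized for SD 1.5 layers, not SDXL ones. A toy NumPy analogue (the 768 vs. 2048 dimensions are illustrative, not the exact tensors involved):

```python
import numpy as np

# An "SDXL-sized" layer weight vs. an "SD 1.5-sized" conditioning vector.
xl_weight = np.zeros((2048, 640))   # layer expecting 2048-dim input features
sd15_cond = np.zeros((1, 768))      # 768-dim embedding from an SD 1.5 model

try:
    sd15_cond @ xl_weight           # inner dimensions (768 vs. 2048) do not match
except ValueError as err:
    print(f"shape mismatch: {err}")
```

PyTorch raises the same complaint as a RuntimeError with the "mat1 and mat2 shapes cannot be multiplied" wording seen above.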
See these (warning: large AND lots of files):
https://huggingface.co/collections/diffusers/sdxl-controlnets-64f9c35846f3f06f5abe351f
https://huggingface.co/lllyasviel/sd_control_collection/tree/main
A how-to, if needed: https://stable-diffusion-art.com/controlnet/
You can git pull these in one pass, but you have to run git lfs install first.
(LFS = Git Large File Storage)
Once pulled, copy into control net models directory. For A1111 it's in the models/ControlNet subdirectory.
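Concretely, the pull described above might look like this (assumes git and git-lfs are already installed; the destination path is a placeholder for your own A1111 install):

```shell
# One-time: enable Git LFS so the multi-GB .safetensors files actually download
git lfs install

# Clone one of the ControlNet collections linked above
git clone https://huggingface.co/lllyasviel/sd_control_collection

# Copy the models where A1111 looks for them
cp sd_control_collection/*.safetensors /path/to/stable-diffusion-webui/models/ControlNet/
```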
Extra notes:
The diffusers full SDXL models tend to do weird things to colors sometimes (no idea how to avoid it); Kohya's limit this a bit, but the control effect is weaker. If anyone here is smarter than me...
The lineart model for XL is buggy; you'll only get crappy "neon" outlines. Use canny instead: I have used it before and it does pretty well.
Strictly personal, but I disagree with some saying you can't use the Pixel Perfect option with XL ControlNets... give it a shot. No issue so far.
Hope this helps !
You need an SDXL controlnet model if you're using an SDXL checkpoint like this one. You can tell that the one you're using is a 1.5, because it has "sd15" in the name. SDXL controlnet models will usually have either "sdxl" or "xl" in the name.
This is a good Open Pose model for SDXL (though there are others, you can look for them online): https://huggingface.co/thibaud/controlnet-openpose-sdxl-1.0/tree/main
(At that link, you'd want the control-lora-openposeXL2-rank256.safetensors file.)
@ze_thriller Ty
@minstrel Ty!
@ze_thriller @minstrel Thank you so much for the detailed response!
amazing work, v_5 is awesome.
It doesn't work :( Please help. I use InvokeAI (this does not happen with other checkpoints).
[2023-11-27 21:49:05,881]::[uvicorn.access]::INFO --> 127.0.0.1:57754 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2023-11-27 21:49:05,888]::[uvicorn.access]::INFO --> 127.0.0.1:57754 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2023-11-27 21:49:05,994]::[InvokeAI]::INFO --> Converting C:\InvokeAI-v3.4.0post2\models\any\main\realcartoonXL_v5.safetensors to diffusers format
[2023-11-27 21:49:06,238]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\services\invocation_processor\invocation_processor_default.py", line 104, in __process
outputs = invocation.invoke_internal(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 591, in invoke_internal
output = self.invoke(context)
^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\invocations\compel.py", line 315, in invoke
c1, c1_pooled, ec1 = self.run_clip_compel(
^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\invocations\compel.py", line 172, in run_clip_compel
tokenizer_info = context.services.model_manager.get_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\services\model_manager\model_manager_default.py", line 112, in get_model
model_info = self.mgr.get_model(
^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\backend\model_management\model_manager.py", line 490, in get_model
model_path = model_class.convert_if_required(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\backend\model_management\models\sdxl.py", line 122, in convert_if_required
return _convert_ckpt_and_cache(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\backend\model_management\models\stable_diffusion.py", line 289, in _convert_ckpt_and_cache
convert_ckpt_to_diffusers(
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\backend\model_management\convert_ckpt_to_diffusers.py", line 1728, in convert_ckpt_to_diffusers
pipe = download_from_original_stable_diffusion_ckpt(checkpoint_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\backend\model_management\convert_ckpt_to_diffusers.py", line 1406, in download_from_original_stable_diffusion_ckpt
set_module_tensor_to_device(unet, param_name, "cpu", value=param)
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\accelerate\utils\modeling.py", line 285, in set_module_tensor_to_device
raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([640, 640]) in "weight" (which has shape torch.Size([640, 640, 1, 1])), this look incorrect.
[2023-11-27 21:49:06,242]::[InvokeAI]::ERROR --> Error while invoking:
Trying to set a tensor of shape torch.Size([640, 640]) in "weight" (which has shape torch.Size([640, 640, 1, 1])), this look incorrect.
Not as familiar with InvokeAI, but this checkpoint is SDXL 1.0, so it uses a lot of VRAM. That error seems to appear when that is an issue. It seems A1111 or ComfyUI may not have the same issue. When you try other checkpoints…are they SDXL 1.0 as well?
@7whitefire7 The standard SDXL 1.0 runs well. Would you recommend any checkpoint similar to this one to reproduce the test in the most suitable way possible, to see if the error can be replicated? Thanks
@ppm4587 Did you try the version prior to 5? Does it happen with all the versions?
@7whitefire7 the same error in V4
[2023-11-28 17:56:47,216]::[uvicorn.access]::INFO --> 127.0.0.1:62571 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2023-11-28 17:56:47,224]::[uvicorn.access]::INFO --> 127.0.0.1:62571 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2023-11-28 17:56:47,383]::[InvokeAI]::INFO --> Converting C:\InvokeAI-v3.4.0post2\models\any\main\realcartoonXL_v4.safetensors to diffusers format
[2023-11-28 17:56:47,751]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\services\invocation_processor\invocation_processor_default.py", line 104, in __process
outputs = invocation.invoke_internal(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 591, in invoke_internal
output = self.invoke(context)
^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\invocations\compel.py", line 315, in invoke
c1, c1_pooled, ec1 = self.run_clip_compel(
^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\invocations\compel.py", line 172, in run_clip_compel
tokenizer_info = context.services.model_manager.get_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\services\model_manager\model_manager_default.py", line 112, in get_model
model_info = self.mgr.get_model(
^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\backend\model_management\model_manager.py", line 490, in get_model
model_path = model_class.convert_if_required(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\backend\model_management\models\sdxl.py", line 122, in convert_if_required
return _convert_ckpt_and_cache(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\backend\model_management\models\stable_diffusion.py", line 289, in _convert_ckpt_and_cache
convert_ckpt_to_diffusers(
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\backend\model_management\convert_ckpt_to_diffusers.py", line 1728, in convert_ckpt_to_diffusers
pipe = download_from_original_stable_diffusion_ckpt(checkpoint_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\backend\model_management\convert_ckpt_to_diffusers.py", line 1406, in download_from_original_stable_diffusion_ckpt
set_module_tensor_to_device(unet, param_name, "cpu", value=param)
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\accelerate\utils\modeling.py", line 285, in set_module_tensor_to_device
raise ValueError(
ValueError: Trying to set a tensor of shape torch.Size([640, 640]) in "weight" (which has shape torch.Size([640, 640, 1, 1])), this look incorrect.
[2023-11-28 17:56:47,757]::[InvokeAI]::ERROR --> Error while invoking:
Trying to set a tensor of shape torch.Size([640, 640]) in "weight" (which has shape torch.Size([640, 640, 1, 1])), this look incorrect.
---------------------------------------------
Maybe this is interesting: in V5, if I do a simple installation (I always do an advanced one), the error changes.
[2023-11-28 18:04:45,162]::[uvicorn.access]::INFO --> 127.0.0.1:62611 - "POST /api/v1/queue/default/enqueue_batch HTTP/1.1" 200
[2023-11-28 18:04:45,169]::[uvicorn.access]::INFO --> 127.0.0.1:62611 - "GET /api/v1/queue/default/status HTTP/1.1" 200
[2023-11-28 18:04:45,295]::[InvokeAI]::INFO --> Converting C:\InvokeAI-v3.4.0post2\models\any\main\realcartoonXL_v5.safetensors to diffusers format
[2023-11-28 18:05:19,282]::[InvokeAI]::ERROR --> Traceback (most recent call last):
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\torch\serialization.py", line 619, in save
_save(obj, opened_zipfile, pickle_module, pickle_protocol, _disable_byteorder_record)
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\torch\serialization.py", line 853, in _save
zip_file.write_record(name, storage.data_ptr(), num_bytes)
RuntimeError: [enforce fail at inline_container.cc:593] . PytorchStreamWriter failed writing file data/0: file write failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\services\invocation_processor\invocation_processor_default.py", line 104, in __process
outputs = invocation.invoke_internal(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 591, in invoke_internal
output = self.invoke(context)
^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\invocations\compel.py", line 315, in invoke
c1, c1_pooled, ec1 = self.run_clip_compel(
^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\invocations\compel.py", line 172, in run_clip_compel
tokenizer_info = context.services.model_manager.get_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\app\services\model_manager\model_manager_default.py", line 112, in get_model
model_info = self.mgr.get_model(
^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\backend\model_management\model_manager.py", line 490, in get_model
model_path = model_class.convert_if_required(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\backend\model_management\models\sdxl.py", line 122, in convert_if_required
return _convert_ckpt_and_cache(
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\backend\model_management\models\stable_diffusion.py", line 289, in _convert_ckpt_and_cache
convert_ckpt_to_diffusers(
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\invokeai\backend\model_management\convert_ckpt_to_diffusers.py", line 1730, in convert_ckpt_to_diffusers
pipe.save_pretrained(
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 696, in save_pretrained
save_method(os.path.join(save_directory, pipeline_component_name), **save_kwargs)
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\transformers\modeling_utils.py", line 2189, in save_pretrained
save_function(shard, os.path.join(save_directory, shard_file))
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\torch\serialization.py", line 618, in save
with _open_zipfile_writer(f) as opened_zipfile:
File "C:\InvokeAI-v3.4.0post2\.venv\Lib\site-packages\torch\serialization.py", line 466, in __exit__
self.file_like.write_end_of_file()
RuntimeError: [enforce fail at inline_container.cc:424] . unexpected pos 25728 vs 25622
[2023-11-28 18:05:19,291]::[InvokeAI]::ERROR --> Error while invoking:
[enforce fail at inline_container.cc:424] . unexpected pos 25728 vs 25622
[2023-11-28 18:08:20,705]::[uvicorn.access]::INFO --> 127.0.0.1:62621 - "GET /api/v1/models/?base_models=sd-1&base_models=sd-2&base_models=sdxl&base_models=sdxl-refiner&model_type=main HTTP/1.1" 200
[2023-11-28 18:08:20,706]::[uvicorn.access]::INFO --> 127.0.0.1:62622 - "GET /api/v1/models/?base_models=sd-1&base_models=sd-2&base_models=sdxl&base_models=sdxl-refiner&model_type=onnx HTTP/1.1" 200
@ppm4587 I will see if I can get InvokeAI installed on my computer and test it. I use A1111 primarily and get no errors. :/ I have not messed with InvokeAI, so will see if there is something I can figure out. One thing that seems like it may be the problem is xformers not being installed....but then I would think you would get the problem on other SDXL checkpoints if that were the case. What other SDXL checkpoints have you tested with InvokeAI?
@7whitefire7 I tried two XL (ColossusProject-XL & Blue Pencil XL) and I have the same problem. The good part is that it doesn't seem to be a problem with your model. The bad thing is that I will have to see how models based on SDXL are added appropriately.
Thanks for everything
@ppm4587 Well, at least you found a commonality. That will give better focus for figuring it out.
@7whitefire7 Finally, in the tool's Discord, they have helped me with your checkpoint (it doesn't work with other XL ones), it was an issue with specifying a specific path. I'm already enjoying the work it has done, and it's incredible. I love this model.
@ppm4587 Glad you were able to get it working!
Excellent quality model, fantastic texture and lighting control and application. Arguably one of the best out-of-the-box general use models.
I get oddball outputs sometimes: maybe 1 in 100 images is grainy with weird colors, like an old film negative, with no refiner and automatic VAE. Do we need refiners, or do most current-gen SDXL models bake it all into one?
I normally do not use refiners. They were almost necessary when SDXL first came out, but I have not used them...because I am trying to get the checkpoint to do more on its own....and then allow more creativity to those who refine. :) The VAE is baked in though.
@7whitefire7 Interesting. Can you override the VAE by selecting a specific one? And "Automatic" is the setting for base models with a baked-in VAE, right?
@Whiterain1000 You can use other VAEs, but if one is baked in….it can do some interesting things. I would select "None" if one is already baked in…."Automatic" can still load an external one.
@7whitefire7 Thank you for all the information.
After playing with several selected XL models, including this one, I felt like I should drop in a few words. I've only played with the v5 of your XL model and it's amazing and versatile. I love its prompt coherence and output quality. I look forward to your future work on this XL model, perhaps with more in-depth training. Thanks for sharing this wonderful model with the community!
Hello,
Can you use XL models on the regular Automatic1111 program? or does something special need to be updated/changed?
I use it on A1111. Just make sure it is updated :)
It doesn't work with LoRA. Is this expected? Is it possible to fix this somehow?
Are you using an SD 1.5 or an SDXL LoRA? SD 1.5 LoRAs do not normally work with SDXL.