I'm doing my best. It's okay if you don't like this model. I'm always doing my best! Just because I'm a beginner doesn't mean I'll shrink back.

Description
For safekeeping. There are so many good models from others that I'm just uploading this one for storage. I applied the style weakly because of the hands.
Comments (7)
finally a good anime model for flux
can fp8 be lora fine tuned? i want to add my own art
Is this model a continuation of lyh_anime_Flux?
I get this message when I try to use it (other Flux checkpoints work):
Loading Model: {'checkpoint_info': {'filename': '/home/user/ai/stable-diffusion-webui-forge/models/Stable-diffusion/Flux_Anime_lyhAnime_korIl01.safetensors', 'hash': 'a759d5d7'}, 'additional_modules': [], 'unet_storage_dtype': None}
StateDict Keys: {'transformer': 780, 'vae': 0, 'ignore': 0}
Traceback (most recent call last):
  File "/home/user/ai/stable-diffusion-webui-forge/modules_forge/main_thread.py", line 30, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "/home/user/ai/stable-diffusion-webui-forge/modules/txt2img.py", line 131, in txt2img_function
    processed = processing.process_images(p)
  File "/home/user/ai/stable-diffusion-webui-forge/modules/processing.py", line 836, in process_images
    manage_model_and_prompt_cache(p)
  File "/home/user/ai/stable-diffusion-webui-forge/modules/processing.py", line 804, in manage_model_and_prompt_cache
    p.sd_model, just_reloaded = forge_model_reload()
  File "/home/user/ai/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/ai/stable-diffusion-webui-forge/modules/sd_models.py", line 504, in forge_model_reload
    sd_model = forge_loader(state_dict, additional_state_dicts=additional_state_dicts)
  File "/home/user/ai/stable-diffusion-webui-forge/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/ai/stable-diffusion-webui-forge/backend/loader.py", line 498, in forge_loader
    component = load_huggingface_component(estimated_config, component_name, lib_name, cls_name, local_path, component_sd)
  File "/home/user/ai/stable-diffusion-webui-forge/backend/loader.py", line 62, in load_huggingface_component
    assert isinstance(state_dict, dict) and len(state_dict) > 16, 'You do not have CLIP state dict!'
AssertionError: You do not have CLIP state dict!
"You do not have CLIP state dict!" — For Forge (or Easy Diffusion beta with the Forge backend), you need to select three things in the VAE / Text Encoder box: a CLIP model, a VAE, and a text encoder. These are the ones I used: clip_l.safetensors, diffusion_pytorch_model.safetensors, and t5-v1_1-xxl-encoder-Q5_k_s.gguf.
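The "StateDict Keys" line in the log above already hints at the cause: 780 transformer keys, 0 VAE keys, and no CLIP keys, so this checkpoint ships only the diffusion transformer and the text encoders must be supplied separately. A minimal sketch for checking what a .safetensors file bundles (the helper and the key prefixes are assumptions for illustration, not part of Forge):

```python
# Hypothetical helper (not part of Forge): group state-dict key names by their
# top-level prefix to see which components a checkpoint actually contains.
from collections import Counter

def count_components(keys):
    """Count keys per top-level prefix, e.g. 'transformer', 'vae', 'text_encoder'."""
    return Counter(k.split(".", 1)[0] for k in keys)

# Sample keys mimicking the log above: diffusion-transformer weights present,
# but no CLIP / T5 text-encoder keys -- hence the assertion error in Forge.
sample_keys = [
    "transformer.single_blocks.0.linear1.weight",
    "transformer.single_blocks.0.linear2.weight",
    "transformer.final_layer.linear.weight",
]
print(count_components(sample_keys))  # Counter({'transformer': 3})

# To inspect a real file (requires the `safetensors` package):
# from safetensors import safe_open
# with safe_open("Flux_Anime_lyhAnime_korIl01.safetensors", framework="pt") as f:
#     print(count_components(f.keys()))
```

If the counter shows no CLIP or text-encoder prefixes, loading will only work when you select those components separately in the UI, as described above.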
Hi, any plans for a lower GGUF quant? The VRAM requirement is a bit high for us local runners, and new GPUs are still 8GB, thanks to Nvidia.
The license says that the outputs can be used commercially. Is this correct?