Night Sky YOZORA Model: "For ultimate image quality and large image sizes (>1536 x 1024)"
Trained by @YozoRaAru
If you want better colour representation, try the new style model Color Box.
YOZORA is a model that pursues perfection and is filled with my personal preferences. I trained it on images that I like, which gives it an unparalleled level of detail and polish. It can produce very exquisite character images.
In addition to training, I also blended it with dozens of other high-quality models that I personally like, and I experimented with layer blending as well. Among these models I gave NovelAI's original model the largest weight, because it provides excellent adaptability to a wide variety of prompts. I haven't tested landscape + character scenes yet, but I believe it can perform well there too.
For the best results, please ensure that your final resolution is greater than 1536 x 1024; this is necessary for generating exquisite illustrations.
Here are some recommended settings for the parameters:
Clip skip: It is recommended to set it to no less than 2. YOZORA already has a rich level of detail, and setting it to 1 may make the image appear cluttered and confusing.
Resolution: YOZORA may not be suitable for generating small images, because the extreme level of detail can become too crowded. It is recommended to use Hires.fix to generate large images: a larger resolution is equivalent to a larger canvas, making it easier for YOZORA to capture details. Aim for 1536 x 1024 or higher as the Hires.fix target.
Negative prompts: It is recommended to use EasyNegative to provide brief and precise descriptions.
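Taken together, the recommendations above can be sketched as a txt2img payload for the AUTOMATIC1111 webui API. The prompt text and the base canvas size are placeholders; the parameter names assume the `/sdapi/v1/txt2img` endpoint, and only the resolution, Hires.fix, and Clip-skip values follow the recommendations:

```python
# Sketch of the recommended settings as an AUTOMATIC1111 /sdapi/v1/txt2img
# payload. The prompt is a placeholder; only the Hires.fix target resolution
# and Clip-skip values follow the model card's recommendations.
payload = {
    "prompt": "1girl, night sky, starry sky",              # placeholder prompt
    "negative_prompt": "EasyNegative",                     # brief, precise negatives
    "width": 768,
    "height": 512,                                         # base canvas
    "enable_hr": True,                                     # Hires.fix on
    "hr_scale": 2,                                         # 768x512 -> 1536x1024
    "denoising_strength": 0.5,
    "override_settings": {"CLIP_stop_at_last_layers": 2},  # Clip skip >= 2
}

# Final resolution after Hires.fix meets the >= 1536x1024 recommendation.
final_w = payload["width"] * payload["hr_scale"]
final_h = payload["height"] * payload["hr_scale"]
print(final_w, final_h)  # 1536 1024
```

POSTing this dictionary as JSON to a running webui instance with the `--api` flag would apply these settings; adjust the base canvas to your VRAM budget.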
The name YOZORA means "strolling among the stars in the night sky". I hope that she can bring you the excitement and joy of a sky full of stars on a starry night.
Comments (52)
add Prompt plz,thanks
Would be really cool if you could add more examples with prompts :3
I wanna be friends if possible, your model is amazing! I wanna learn how to use it efficiently. Rakosz#2468
No advantages other than the largest size
This is a great model but also really big in filesize. I'd like to see a LORA 320 without VAE, for greater flexibility.
how to solve the CUDA out of memory problem when generating at high resolution (1280+)?
Select the Hires.fix checkbox.
@FunJoo selected it, still out of memory.
Should I lower the resolution? But the author recommends high resolution.
@fizzballs What is the model of your video card, and how much video memory does it have? My RTX 3060 Laptop (6 GB) can generate a maximum size of 1600x900.
@FunJoo RTX 3070 Ti laptop, 8 GB. Always out of CUDA memory when the resolution is above 1200 pixels.
@fizzballs
I tried to use my RTX 2080 desktop (8 GB) to generate a 1920x1080 pic, and it was also out of CUDA memory.
Then I found some suggestions on the internet and edited webui-user.bat:
set COMMANDLINE_ARGS= --medvram --xformers
It works for me; you can try it.
@FunJoo yeah! It started working above 1200 pixels, which is great. But still out of memory at 1600x900 even though I'm on a 3070 Ti :( Thanks though, it got a little better.
@fizzballs You can try putting the stable_diffusion directory on drive C; Windows uses different virtual memory policies for different hard disks.
Is there a recommended VAE? The pruned version has the VAE baked in, so in the webui settings should the VAE option be set to [None]?
did you merge Yoneyama Mai LoRA to this?
Looking for the base SD model for a Yoneyama Mai LoRA; willing to pay. QQ: 3406167979
Hello, your model is very good. I wanted to know how you made it. Do you have a link to where you learned from? Thank you very much.
Why are all the images I load coming out pixelated?
I had a slightly similar issue where SOME of the generated images had some pixelation. I ignored it because it looked "AESTHETIC"; it mostly came out in post-processing, and it wasn't the whole image either, just some weird spots.
amazing!
it's hard to use
On my local SD install, as soon as I switch to this model the browser throws an error and the app itself crashes. Can anyone help?
Post the error message.
Without an error message or a screenshot, nobody can help you.
Most likely the model is just too big for your local hardware. The author says it is meant for high-resolution output.
I have the same problem; I can't load large models. Did you solve it?
@CPW1997 Without 16 GB of VRAM it is hard to run, and it may not even load.
Can anyone help me figure out why it keeps throwing this error?
NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Download YoZoRa-V1-purned-fp16 instead.
If you'd just paste your error message into a translation tool, you wouldn't need to ask here.
It's an 8.49 GB model; you need 16 GB of VRAM to run it.
how much clip skip should I use
Why does the image blow up as soon as I increase the size a little? See the image I posted below for details.
Probably because you don't have enough VRAM.
The author's prompts are really incredible!!! An AI bodhisattva!!!
Which prompts, bro? Do you mean simple prompts already give great results?
Why doesn't the downloaded model file work? The filename shows as 14459 and the file type is .File. I already put it in models\Stable-diffusion.
Too flamboyant: long robes get rendered as beggar's rags, torn into strips. XD
Whoa, impressive.
Can the VAE file go in either models\Stable-diffusion or models\vae? And can generation pick this VAE without my selecting it manually? Otherwise, after switching models a few times I'll forget which VAE goes with which.
You can give the VAE the same name as the model, then it gets selected automatically.
So hard to run: with 12 GB of VRAM it blows past the memory limit as soon as I push settings up a bit, and at lower resolutions the output is blurry.
Do you have an account on a Chinese platform or a fan group? I'd love to ask about how to use the model. I really love its style, but there is nobody to discuss it with (crying).
www.xiaofanai.cn
Is there a big difference between origin and pruned? 8 gigabytes... my hard drive is going to cry 😂
The CLIP weights have an error and need to be fixed.
nightSkyYOZORAStyle_yozoraV1PurnedFp16.safetensors
Hashes:
AUTOV2: 4b118b2d1b
AUTOV1: 94245290
tensor([[ 0, 0, 1, 2, 3, 5, 5, 6, 7, 9, 10, 10, 11, 12, 13, 15, 15, 16, 18, 18, 20, 20, 21, 22, 23, 25, 25, 26, 27, 28, 30, 31, 31, 32, 33, 35, 36, 36, 37, 38, 40, 40, 41, 42, 43, 45, 45, 46, 47, 48, 50, 50, 51, 52, 53, 55, 55, 57, 57, 58, 60, 60, 62, 62, 63, 64, 65, 67, 67, 68, 70, 70, 72, 72, 73, 75, 75]])
Type: torch.int64
Wrong CLIP indexes: [1, 2, 3, 4, 6, 7, 8, 11, 12, 13, 14, 16, 17, 19, 21, 22, 23, 24, 26, 27, 28, 29, 32, 33, 34, 37, 38, 39, 41, 42, 43, 44, 46, 47, 48, 49, 51, 52, 53, 54, 56, 58, 59, 61, 63, 64, 65, 66, 68, 69, 71, 73, 74, 76]
It is recommended to fix this checkpoint.
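For context, the "Wrong CLIP indexes" report above comes from checking the text encoder's position-embedding index tensor, which in a healthy SD 1.x checkpoint is exactly the integer sequence 0..76 (stored under the key cond_stage_model.transformer.text_model.embeddings.position_ids). Merges with rounding damage produce repeated or skipped values like those shown. A minimal sketch of that check with NumPy; the helper name and sample data are illustrative:

```python
import numpy as np

def broken_clip_indexes(position_ids):
    """Return the positions where position_ids deviates from 0..N-1."""
    flat = np.asarray(position_ids).reshape(-1)
    expected = np.arange(flat.shape[0])
    return np.flatnonzero(flat != expected).tolist()

# A healthy tensor is the exact sequence 0..76 -> nothing reported.
assert broken_clip_indexes(np.arange(77)) == []

# A damaged prefix like the dump above (it should read 0..7):
damaged = [0, 0, 1, 2, 3, 5, 5, 6]
print(broken_clip_indexes(damaged))  # -> [1, 2, 3, 4, 6, 7]
```

In practice you would load the tensor from the checkpoint file (for example with the safetensors library) and, if it is damaged, overwrite it with np.arange before saving the repaired checkpoint.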
What a gorgeous original style! Just a delight for the eyes! Wonderful, thanks!
Files
nightSkyYOZORAStyle_yozoraV1Origin.safetensors
Mirrors
nightSkyYOZORAStyle_yozoraV1Origin.safetensors
nsy.safetensors
NightSkyYOZORA-v1.safetensors