Trigger word
sleepeace
Introduction
Uttering the word 'sleepeace' instantly transforms the subject into a picture of blissful slumber. This is Version 2 of the LoRA, featuring a delightfully relaxed mouth that is simply adorable. Different weights produce a variety of expressions.
0.8 - 1.0: Any character will fall asleep instantly, dreaming sweet dreams.
0.4 - 0.7: The subject won't always fall asleep; instead you'll often see a happy, relaxed expression.
1.3 and above: No prompt can disturb this peaceful sleep.
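For example, in AUTOMATIC1111-style prompts the weight is set inside the LoRA tag like this (the filename `sleepeace_v2` is a placeholder, not necessarily the actual file name):

```
sleepeace, 1girl, lying in bed, <lora:sleepeace_v2:0.9>
```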
Improvements in Version 2
We've improved hand rendering.
Yellow artifacts appearing around the eyes and hair have been addressed.
We've resolved image breakdown when generating at resolutions above 512.
Adjustments were made so the LoRA adheres to prompts more faithfully.
We've expanded the training (teacher) images to support more compositions.
We've addressed the signs of overfitting that appeared with several models.
LECO version
The LECO version of SleePeace has the following characteristics:
It does not affect the base model's artistic style.
Images do not break down even when high weights are applied.
When using it, keep the following points in mind:
You can apply a weight in the range of 1 to 5.
The effect depends on how strongly each model responds to the prompt, so adjust the applied weight per model.
Too many tokens will weaken the LoRA's effect; compensate by increasing the applied weight.
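As a sketch, applying the LECO version at one of these higher weights might look like this (the filename `sleepeace_LECO` is a placeholder):

```
sleepeace, 1girl, <lora:sleepeace_LECO:3>
```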
In conclusion
Since our main concept is 'sleeping blissfully in bed', the composition will depict sleeping in bed in most cases. While this is the intended effect, you may sometimes want a blissful sleeping face away from a bed. In Version 2, we've adjusted the teacher images to accommodate such needs.
Additionally, while Version 2 was created to output better results more consistently, the 'sleepeace' that Version 1 occasionally produces is absolutely splendid, and it can sometimes outperform Version 2. Both are our favorite LoRAs, so choosing one or the other depending on the situation can yield excellent results. We hope this will be of great assistance to you.
Comments (6)
Exactly what I was looking for. You've offered a great service to the community, sir!
What kind of training settings do you use so that it captures the concept in generations without overbaking the style?
@Lora1111
Thank you for your comment.
We tested LoRA output generation with more than 20 different models. After confirming satisfactory results with all of them, we adjusted the LoRA Block Weight. The applied Block Weight is 1,0,0,1,1,0,1,0,1,1,1,1,0.5,1,0,1,1.
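For anyone unfamiliar with block weights: with the sd-webui-lora-block-weight extension, a 17-value vector like this is typically applied directly in the prompt tag (the filename `sleepeace_v2` here is a placeholder, not the actual file name):

```
<lora:sleepeace_v2:1:1,0,0,1,1,0,1,0,1,1,1,1,0.5,1,0,1,1>
```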
From the LoRA posting page, you can download the images used for training together with the LoRA, as well as the configuration file. We hope they serve as a useful reference.
Does this answer your question?
Thanks.
@yunyun Hi, thanks for the reply! I will have to look into block weights. Although my question is of a more basic nature. For example, I use 200/200 dim/alpha for training styles, but is that much too strong for concepts?
Do you have a recommended learning rate or dim/alpha for concepts in general? When I train a concept it captures the result, but the art style from the images in the dataset is very strong in the outputs, and I'm not sure how to minimize that.
So for example it might be a 'girl eating banana' concept.
And during testing, by the time a girl eating a banana shows up in the results, the art style is so strong it overpowers the original checkpoint's style, and the LoRA can't be used with characters or other style LoRAs.
Is what I'm doing wrong related to the images in the dataset, or to training settings like dim/alpha, learning rate, and so on?
Thanks for sharing your dataset files, by the way! I can see that separating things into multiple folders might be important.
@Lora1111
In my experience, the generally recommended settings fall between dim: 16-128 and alpha: 1-64, with alpha set at or below half of dim for the best results. This applies not only to concept LoRAs but to other types of LoRA as well, so dim: 200 or alpha: 200 is outside the recommended range.
My own interpretation is that dim controls how finely the images in the dataset are learned: at 128 the dataset images feel deeply baked into the result, while at 32 the learning feels insufficient. Alpha acts as a threshold that moderates the learning intensity set by dim; in most cases a value between 8 and 16 is appropriate.
That said, when I want to emphasize a distinctive concept I occasionally try larger settings such as dim: 196 with alpha: 128. These experiments don't always succeed, but they offer valuable insight into how dim and alpha interact.
The choice of optimizer_type is also important, since the ideal dim and alpha values vary with it. Currently I prefer DAdaptAdam: training takes longer, but it greatly reduces the risk of overfitting. The ideal learning rate likewise depends on the optimizer_type, dim, alpha, and epoch settings; recently I have been testing learning rates ranging from 5e-5 to 1.0.
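As a rough sketch, these recommendations translate into kohya-ss sd-scripts settings along these lines (illustrative values within the ranges above, not the exact configuration used for this LoRA):

```
network_dim = 32              # 16-128 is the usual range for concept LoRAs
network_alpha = 16            # keep alpha at or below half of dim
optimizer_type = "DAdaptAdam"
learning_rate = 1.0           # DAdaptation adapts the rate itself; 1.0 is a common starting point
```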
Keeping the dataset's style from dominating other model styles is an ongoing challenge for me as well. One strategy I use is careful tag adjustment during training. For instance, when the dataset contains an anime-style depiction of a "girl eating a banana", I add tags such as anime and anime_coloring to the image caption to reduce that style's dominance. For images with a real-life photographic feel, I add a real_photo tag so the styles stay clearly separated. In short, if you want to be able to pull a specific element out of the LoRA, deliberately mix datasets containing that element and tag their captions accordingly.
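As a small sketch, the caption (.txt) files for such a mixed dataset might look like this (file names are hypothetical; the tags follow the examples above):

```
# img_001.txt — anime-style training image: tag the style explicitly
1girl, eating, banana, anime, anime_coloring

# img_002.txt — photographic training image
1girl, eating, banana, real_photo
```

With the style captured in the tags, you can then omit the tag or place it in the negative prompt at generation time to suppress that style.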
(Thank you for reading this far. Please note that this reply was translated into English, so some nuances may differ.)
@yunyun Hi, thanks for the detailed reply.
Your way is definitely more logical: tag art-style elements like anime and anime coloring, or tag specific artists! That way, even if the dataset leans heavily on one particular artist, you could avoid most of that art style by simply not including the tag, or by putting the artist's style or name in the negative prompt.
I've never used DAdaptation before, but I'll give it a try after reworking my dataset and redoing the tags.
Thanks for the advice :)