CivArchive
    FF-Woman-LoRA-FA-Text-Encoder-Enhancer - v1.0 FF-WoMM-XL-FA-v0208-
    NSFW

    Woman-LoRA-FA-Woman-Text-Encoder-Enhancer

    Added experimentally by Casanova & kohya-ss

    LoRA-FA might not work at all for everyone ¯\_(ツ)_/¯ 🤟 🥃



    🚀 Introducing the Stable Diffusion XL Lora FA 🚀

    Dive into the future of text encoding with our cutting-edge Stable Diffusion XL Lora FA, trained on a dataset of 5,850 SFW files!

    🔍 Core Features:

    • TEXT ENCODER ONLY: Focused and specialized, this model is all about precision in text encoding.

    • Dependence on UNet: Our model heavily leans on the base model's UNet, ensuring top-tier performance.

    • ss_network_module: Powered by the experimental Kohya "networks.lora_fa" module, it's built for efficiency and speed.

    🛠 Technical Specs:

    • Adaptive Noise Scale: A fine-tuned 0.011, ensuring optimal noise management.

    • Max Bucket Resolution: A robust 2048, catering to high-resolution needs.

    • Data Loader Workers: With a max of 8, it's all about multitasking and speed.

    • Max Resolution: A crystal clear "1024x1024", because clarity matters.

    • Noise Offset: Set at 0.08 with the "Original" type, it's all about maintaining the perfect balance.

    • CPU Threads: 8 threads working in harmony for seamless processing.

    • Optimizer: The "Prodigy" is at the helm, steering the model towards perfection with arguments like weight decay, decoupling, and bias correction.

    • Pretrained Model: Based on the renowned "FFusion/FFusionXL-BASE".

    • Training Comment: "FFusion Stage o7 - WoMM-TE", because every masterpiece has its unique signature.
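The noise-offset settings above work together: in kohya-style trainers, the base offset perturbs each sample's training noise per channel, and the adaptive scale adds a term proportional to the mean latent value. A rough numpy sketch of that combination (an illustrative reconstruction, not the exact sd-scripts code; the function and variable names are assumptions):

```python
import numpy as np

def apply_noise_offset(latents, noise, noise_offset=0.08, adaptive_noise_scale=0.011):
    """Add a per-sample, per-channel offset to the training noise.

    The adaptive term scales with the mean latent value, so samples with
    brighter or darker latents receive a proportionally larger offset.
    Illustrative sketch only -- not the exact sd-scripts implementation.
    """
    rng = np.random.default_rng(0)
    b, c = latents.shape[:2]
    # Base offset: one random value per (sample, channel), broadcast over H x W.
    offset = noise_offset * rng.standard_normal((b, c, 1, 1))
    # Adaptive term: shift the offset toward each channel's mean latent value.
    offset = offset + adaptive_noise_scale * latents.mean(axis=(2, 3), keepdims=True)
    return noise + offset

latents = np.random.default_rng(1).standard_normal((2, 4, 8, 8))
noise = np.random.default_rng(2).standard_normal((2, 4, 8, 8))
noisy = apply_noise_offset(latents, noise)
```

Offsetting the noise this way is commonly used to let diffusion models learn overall brightness shifts that plain Gaussian noise schedules struggle to represent.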

    Step into the next generation of text encoding. Welcome to the Stable Diffusion XL Lora FA experience. 🌌

    https://arxiv.org/abs/2308.03303


    Trained on 5850 File(s) 3,513,965,736 bytes [SFW]

    TEXT ENCODER ONLY
    Heavily dependent on the base model's UNet.



    By leveraging the strengths of the base model's UNet, Lora FA ensures that you get the best of both worlds: efficiency and top-tier performance.

    The LoRA-FA referenced here is the fine-tuning method from the paper linked above (arXiv:2308.03303): LoRA with Frozen-A. In standard LoRA, each adapted weight gets a low-rank update composed of a down-projection A and an up-projection B, and both matrices are trained. LoRA-FA freezes A at its initialization and trains only B.

    Because A never changes, the optimizer keeps state only for B, and the activations that would be needed to compute A's gradient do not have to be stored. This substantially reduces memory use during training.

    The update is therefore constrained to the low-rank subspace spanned by the frozen A. The paper reports that this constraint costs little accuracy compared to standard LoRA while roughly halving the number of trainable adapter parameters.

    Key features of LoRA-FA:

    It trains only the up-projection B; the down-projection A stays frozen.
    It reduces optimizer-state and activation memory relative to standard LoRA.
    It changes nothing at inference time: the resulting adapter loads and merges like any other LoRA.
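In terms of the cited paper (arXiv:2308.03303), LoRA-FA keeps the pretrained weight W and the low-rank down-projection A frozen and updates only the up-projection B. A minimal numpy sketch of one training step under that constraint (the toy dimensions and squared-error loss are assumptions for illustration, not the model's actual training setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 8, 8, 2

W = rng.standard_normal((d_in, d_out))   # frozen pretrained weight
A = rng.standard_normal((d_in, rank))    # frozen down-projection (the "FA" part)
B = np.zeros((rank, d_out))              # trainable up-projection, zero-init

def forward(x):
    # Adapted layer: base path plus the low-rank update (x @ A) @ B.
    return x @ W + (x @ A) @ B

# One gradient step on a toy squared-error objective; only B is updated.
x = rng.standard_normal((4, d_in))
target = rng.standard_normal((4, d_out))

y = forward(x)
grad_y = 2.0 * (y - target) / x.shape[0]
grad_B = (x @ A).T @ grad_y              # gradient flows only into B
B -= 0.1 * grad_B                        # W and A are never touched
```

Because A never receives a gradient, its optimizer state and the activations needed to backpropagate into it can be dropped, which is where the memory saving comes from.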


    ss_network_module: "networks.lora_fa"

    "adaptive_noise_scale": 0.011,
    "max_bucket_reso": 2048,
    "max_data_loader_n_workers": "8",
    "max_resolution": "1024,1024",
    "noise_offset": 0.08,
    "noise_offset_type": "Original",
    "num_cpu_threads_per_process": 8,
    "optimizer": "Prodigy",
    "optimizer_args": "weight_decay=0.01 decouple=True d0=0.0001 use_bias_correction=True",
    "pretrained_model_name_or_path": "FFusion/FFusionXL-BASE",
    "training_comment": "FFusion Stage o7 - WoMM-TE",

    Description

    Date: 2023-08-27T19:42:21 Title: FF-WoMM-XL-FA-v0208-TE

    Resolution: 1024x1024 Architecture: stable-diffusion-xl-v1-base/lora

    Network Dim/Rank: ??.0 Alpha: ??.0

    Module: networks.lora_fa

    Text Encoder (1) weight average magnitude: 6.208398141837334

    Text Encoder (1) weight average strength: 0.020247638324627188

    Text Encoder (2) weight average magnitude: 6.651438395599051

    Text Encoder (2) weight average strength: 0.01676657589104??

    No UNet found in this LoRA

    FAQ

    LORA
    SDXL 1.0
    by idle

    Details

    Downloads
    641
    Platform
    SeaArt
    Platform Status
    Available
    Created
    4/1/2024
    Updated
    9/24/2025
    Deleted
    -

    Files

    Available On (1 platform)

    Same model published on other platforms. May have additional downloads or version variants.