Woman-LoRA-FA-Woman-Text-Encoder-Enhancer
Added experimentally by Casanova & kohya-ss.
LoRA-FA might not work at all for everyone ¯\_(ツ)_/¯ 🤟 🥃
🚀 Introducing the Stable Diffusion XL LoRA-FA 🚀
Dive into the future of text encoding with our cutting-edge Stable Diffusion XL LoRA-FA, trained on a dataset of 5,850 files (SFW)!
🔍 Core Features:
TEXT ENCODER ONLY: Focused and specialized, this model is all about precision in text encoding.
Dependence on the UNet: this LoRA leans heavily on the current model's UNet, so pair it with the base model for top-tier performance.
ss_network_module: Powered by the experimental Kohya "networks.lora_fa" module, it's built for efficiency and speed.
🛠 Technical Specs:
Adaptive Noise Scale: A fine-tuned 0.011, ensuring optimal noise management.
Max Bucket Resolution: A robust 2048, catering to high-resolution needs.
Data Loader Workers: With a max of 8, it's all about multitasking and speed.
Max Resolution: A crystal clear "1024x1024", because clarity matters.
Noise Offset: Set at 0.08 with the "Original" type, it's all about maintaining the perfect balance.
CPU Threads: 8 threads working in harmony for seamless processing.
Optimizer: The "Prodigy" optimizer is at the helm, steering the model towards perfection with arguments like weight decay, decoupling, and bias correction (a minimal setup sketch follows this list).
Pretrained Model: Based on the renowned "FFusion/FFusionXL-BASE".
Training Comment: "FFusion Stage o7 - WoMM-TE", because every masterpiece has its unique signature.
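For those who want to mirror the optimizer setup outside of kohya-ss, here is a minimal sketch using the prodigyopt package (presumably what the "Prodigy" optimizer type wraps); the parameter group below is a placeholder standing in for the LoRA weights, and only the listed arguments come from the training config:

```python
import torch
from prodigyopt import Prodigy

# Placeholder parameter standing in for the text-encoder LoRA weights.
params = [torch.nn.Parameter(torch.zeros(768, 16))]

# Mirrors optimizer_args: weight_decay=0.01 decouple=True d0=0.0001 use_bias_correction=True
optimizer = Prodigy(
    params,
    lr=1.0,                    # Prodigy adapts the step size; lr is typically left at 1.0
    weight_decay=0.01,
    decouple=True,             # decoupled (AdamW-style) weight decay
    d0=1e-4,                   # initial estimate of the adaptive step size
    use_bias_correction=True,
)
```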
Step into the next generation of text encoding. Welcome to the Stable Diffusion XL Lora FA experience. 🌌
Reference: LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning, https://arxiv.org/abs/2308.03303
Trained on 5,850 files (3,513,965,736 bytes) [SFW]
TEXT ENCODER ONLY
Heavily dependent on the current model's UNet.
By leveraging the strengths of the current model's UNet, this LoRA-FA ensures that you get the best of both worlds: efficiency and top-tier performance.
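As an illustration of that pairing, here is a hedged sketch of loading a text-encoder-only LoRA on top of the base model with diffusers; the weight filename below is a placeholder, since the actual file name in this repository may differ:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base model whose UNet this LoRA depends on.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "FFusion/FFusionXL-BASE", torch_dtype=torch.float16
).to("cuda")

# Load the text-encoder-only LoRA on top of the base pipeline.
# "lora_weights.safetensors" is a placeholder filename.
pipe.load_lora_weights("path/to/this/repo", weight_name="lora_weights.safetensors")

image = pipe("portrait photo of a woman, studio lighting", num_inference_steps=30).images[0]
image.save("out.png")
```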
LoRA-FA (LoRA with Frozen-A, https://arxiv.org/abs/2308.03303) is a memory-efficient variant of LoRA. A standard LoRA layer learns a low-rank update B·A on top of the frozen pretrained weight and trains both the down-projection A and the up-projection B; LoRA-FA instead freezes A at its random initialization and trains only B.
Because A never changes, the backward pass for B only needs the low-rank activations A·x instead of the full-width input activations, which lowers activation memory during fine-tuning. The weight change is also constrained to the low-rank subspace spanned by the frozen A.
Here are some of the key properties of LoRA-FA:
It trains only the up-projection matrix B; the down-projection A and the pretrained weights stay frozen.
It is memory-efficient, cutting activation and optimizer-state memory compared to standard LoRA.
According to the paper, it reaches fine-tuning accuracy close to standard LoRA and full fine-tuning across a range of tasks.
The kohya-ss implementation used here ("networks.lora_fa") is still experimental, so results may vary.
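To make the frozen-A idea concrete, here is a minimal PyTorch sketch of a LoRA-FA linear layer; it is an illustration only, not the kohya-ss "networks.lora_fa" implementation, and the rank, alpha, and layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

class LoRAFALinear(nn.Module):
    """Minimal LoRA-FA wrapper: pretrained W is frozen, A is frozen after init, only B trains."""
    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # frozen pretrained weight
        self.lora_A = nn.Parameter(          # frozen down-projection (the "FA" part)
            torch.randn(rank, base.in_features) * 0.01, requires_grad=False
        )
        self.lora_B = nn.Parameter(          # trainable up-projection, initialized to zero
            torch.zeros(base.out_features, rank)
        )
        self.scale = alpha / rank

    def forward(self, x):
        # y = Wx + scale * B(Ax); only B receives gradients
        return self.base(x) + (x @ self.lora_A.T) @ self.lora_B.T * self.scale

# Usage: wrap a text-encoder projection layer and optimize only the trainable parameters.
layer = LoRAFALinear(nn.Linear(768, 768), rank=16)
trainable = [p for p in layer.parameters() if p.requires_grad]  # just lora_B
```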
ss_network_module: "networks.lora_fa"
"adaptive_noise_scale": 0.011,
"max_bucket_reso": 2048,
"max_data_loader_n_workers": "8",
"max_resolution": "1024,1024",
"noise_offset": 0.08,
"noise_offset_type": "Original",
"num_cpu_threads_per_process": 8,
"optimizer": "Prodigy",
"optimizer_args": "weight_decay=0.01 decouple=True d0=0.0001 use_bias_correction=True",
"pretrained_model_name_or_path": "FFusion/FFusionXL-BASE",
"training_comment": "FFusion Stage o7 - WoMM-TE",