We are thrilled to announce the alpha release of Flux.1 Lite, an 8B-parameter transformer model distilled from FLUX.1-dev. It uses 7 GB less RAM and runs 23% faster than the original model while keeping the same bfloat16 precision.
Flux.1 Lite is ready to unleash your creativity! For the best results, we strongly recommend using a guidance_scale of 3.5 and setting n_steps between 22 and 30.
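As a rough sketch of how those settings might look with the `diffusers` `FluxPipeline`, here is a minimal example. The checkpoint id `Freepik/flux.1-lite-8B-alpha` and the `generate` helper are assumptions for illustration; adjust them to the actual published repo. Running it requires a CUDA GPU and downloads the model weights.

```python
# Recommended settings from the release notes.
RECOMMENDED_GUIDANCE_SCALE = 3.5
RECOMMENDED_STEPS = 28  # anywhere in the suggested 22-30 range works

def generate(prompt: str, steps: int = RECOMMENDED_STEPS):
    """Sketch: run Flux.1 Lite with the recommended settings (assumed repo id)."""
    # Heavy imports kept inside the function so the snippet can be read
    # and inspected without torch/diffusers installed.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "Freepik/flux.1-lite-8B-alpha",  # assumed checkpoint name
        torch_dtype=torch.bfloat16,      # same precision as the original model
    )
    pipe.to("cuda")
    result = pipe(
        prompt,
        guidance_scale=RECOMMENDED_GUIDANCE_SCALE,
        num_inference_steps=steps,
    )
    return result.images[0]

# Example call (requires a GPU):
# image = generate("A close-up photo of a hummingbird at sunrise")
# image.save("hummingbird.png")
```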