Stable Diffusion 3.5 Large Turbo is a Multimodal Diffusion Transformer (MMDiT) text-to-image model with Adversarial Diffusion Distillation (ADD) that features improved performance in image quality, typography, complex prompt understanding, and resource-efficiency, with a focus on fewer inference steps.
Please note: This model is released under the Stability Community License. Visit Stability AI to learn more, or contact us for commercial licensing details.
Model Description
Developed by: Stability AI
Model type: MMDiT text-to-image generative model
Model Description: This model generates images based on text prompts. It is an ADD-distilled Multimodal Diffusion Transformer that uses three fixed, pretrained text encoders and QK-normalization.
License
Community License: Free for research, non-commercial, and commercial use for organizations or individuals with less than $1M in total annual revenue. More details can be found in the Community License Agreement. Read more at https://stability.ai/license.
For individuals and organizations with annual revenue above $1M: Please contact us to get an Enterprise License.
Model Sources
For local or self-hosted use, we recommend ComfyUI for node-based UI inference, or diffusers or GitHub for programmatic use.
ComfyUI: GitHub, Example Workflow
Hugging Face Space: Space
Diffusers: See below.
GitHub: GitHub.
API Endpoints:
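As referenced under Diffusers above, the snippet below is a minimal usage sketch: it loads the Turbo checkpoint with diffusers' StableDiffusion3Pipeline and samples in 4 steps with guidance disabled, the typical configuration for an ADD-distilled model. It assumes a CUDA-capable GPU with sufficient VRAM and a recent diffusers release.

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Load the Turbo checkpoint in bf16 (the model is large; a high-VRAM GPU is assumed).
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large-turbo",
    torch_dtype=torch.bfloat16,
)
pipe = pipe.to("cuda")

# ADD-distilled "Turbo" models sample in very few steps and are run
# without classifier-free guidance (guidance_scale=0.0).
image = pipe(
    "A capybara holding a sign that reads Hello World",
    num_inference_steps=4,
    guidance_scale=0.0,
).images[0]
image.save("capybara.png")
```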
Implementation Details
QK Normalization: Implements the QK normalization technique to improve training stability (see the illustrative sketch after this list).
Adversarial Diffusion Distillation (ADD): Uses ADD (see the technical report), which enables sampling with 4 steps at high image quality.
Text Encoders:
CLIPs: OpenCLIP-ViT/G, CLIP-ViT/L, context length 77 tokens
T5: T5-xxl, context length 77/256 tokens at different stages of training
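For readers unfamiliar with QK normalization, the sketch below shows the technique in a generic PyTorch self-attention module: queries and keys are RMS-normalized per attention head before the dot product, which bounds the attention logits and helps keep training stable. The class, layer sizes, and use of nn.RMSNorm (PyTorch 2.4+) are illustrative assumptions, not the actual MMDiT implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn

class QKNormAttention(nn.Module):
    """Illustrative self-attention with QK normalization (not the MMDiT code)."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Separate norms for queries and keys, applied over each head's channel dim.
        self.q_norm = nn.RMSNorm(self.head_dim)
        self.k_norm = nn.RMSNorm(self.head_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Reshape to (batch, heads, tokens, head_dim).
        q = q.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        # QK normalization: normalize queries and keys before attention.
        q, k = self.q_norm(q), self.k_norm(k)
        out = F.scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(b, n, -1)
        return self.proj(out)
```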
Training Data and Strategy:
This model was trained on a wide variety of data, including synthetic data and filtered publicly available data.
For more technical details of the original MMDiT architecture, please refer to the Research paper.