SwiftBrush V2: Make Your One-step Diffusion Model Better Than Its Teacher

Abstract

In this paper, we aim to enhance the performance of SwiftBrush, a prominent one-step text-to-image diffusion model, to be competitive with its multi-step Stable Diffusion counterpart. We first explore the quality-diversity trade-off between SwiftBrush and SD Turbo: the former excels in image diversity, while the latter excels in image quality. Based on these insights, we improve the model's training approach, chiefly by optimizing the model's weight initialization and by adopting an advanced training strategy that enhances learning efficiency, including auxiliary losses during the distillation process. Combined with further post-training improvements, our method establishes a new state-of-the-art one-step diffusion model, achieving an FID of 8.14 and surpassing all GAN-based and multi-step Stable Diffusion models.

Publication
European Conference on Computer Vision (ECCV), 2024
Khoi Nguyen