Stable Diffusion XL Turbo (SDXL Turbo) creates detailed images at stunning speeds, even at home.
On Tuesday, Stability AI released an AI image synthesis model called "Stable Diffusion XL Turbo" that can quickly generate images from written prompts. In fact, the model is so fast that the company promotes it as a "real-time" image generator, since it can also rapidly transform images from sources such as webcams.
The main innovation of SDXL Turbo is its ability to produce image output in a single step, a significant reduction from the 20-50 steps required by its predecessor. Stability AI attributes this leap in efficiency to a technique called adversarial diffusion distillation (ADD). ADD combines score distillation, in which the model learns from an existing image synthesis model acting as a teacher, with an adversarial loss, which sharpens the model's ability to distinguish real from generated images and so improves the realism of its output.
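To make the two ingredients concrete, here is a minimal, self-contained PyTorch sketch of how a distillation loss and an adversarial loss might be combined into one training objective. This is not Stability AI's code: the tiny networks, the noising step, and the weighting factor `lambda_adv` are illustrative assumptions, and real ADD training alternates generator and discriminator updates rather than sharing one backward pass.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    """Stand-in for a student/teacher diffusion denoiser (a UNet in practice)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )
    def forward(self, noisy):
        return self.net(noisy)  # predicts a denoised image

class TinyDiscriminator(nn.Module):
    """Stand-in for the discriminator scoring real vs. generated images."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # one realism score per image

student, teacher = TinyDenoiser(), TinyDenoiser()
teacher.requires_grad_(False)      # the teacher (e.g. a full SDXL model) stays frozen
disc = TinyDiscriminator()
lambda_adv = 0.5                   # illustrative loss weighting, not a published value

real = torch.rand(4, 3, 64, 64)    # batch of "real" training images
noisy = real + torch.randn_like(real)  # crude stand-in for forward diffusion

# Student produces a one-step prediction from the noisy input.
fake = student(noisy)

# 1) Distillation loss: match the frozen teacher's denoising output.
with torch.no_grad():
    teacher_pred = teacher(noisy)
distill_loss = F.mse_loss(fake, teacher_pred)

# 2) Adversarial loss: push the student's output to be rated "real" by the discriminator.
adv_loss = F.softplus(-disc(fake)).mean()

total_loss = distill_loss + lambda_adv * adv_loss
total_loss.backward()
print(f"distill={distill_loss.item():.4f}  adv={adv_loss.item():.4f}")
```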
In a research paper published Tuesday, Stability AI details the inner workings of the ADD technique. According to the company, ADD gives SDXL Turbo advantages similar to those of generative adversarial networks (GANs), most notably single-step image output.
SDXL Turbo's images aren't as detailed as those produced by SDXL at higher resolutions, so it's not a complete replacement for the previous model. The trade-off is a dramatic gain in speed.
To try it out, we ran SDXL Turbo natively on an Nvidia RTX 3060 using Automatic1111 (with the SDXL Turbo weights loaded in place of the usual SDXL weights). It produced a 3-step 1024×1024 image in about 4 seconds, compared to 26.4 seconds for a 20-step SDXL image of similar detail. Smaller images generate much faster (under 1 second for 512×768), and more powerful graphics cards, such as an RTX 3090 or 4090, will shorten generation times further. Contrary to Stability's marketing, we found that SDXL Turbo images look best at about 3-5 steps per image.
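For readers who prefer scripting over a web UI, here is a minimal sketch using Hugging Face's diffusers library rather than Automatic1111 (a tooling assumption on our part; the model ID "stabilityai/sdxl-turbo" is the published checkpoint, while the prompt and step count are illustrative):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL Turbo checkpoint in half precision to fit consumer GPUs.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
)
pipe.to("cuda")

# SDXL Turbo is trained for very few steps; classifier-free guidance is disabled.
image = pipe(
    prompt="a photo of a red fox in the snow, detailed fur",
    num_inference_steps=3,   # 3-5 steps gave the best detail in our testing
    guidance_scale=0.0,
).images[0]
image.save("sdxl_turbo_fox.png")
```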
That generation speed is where the "real-time" claim comes in. Stability AI says that on an Nvidia A100, a powerful AI-tuned graphics processor, the model can generate a 512×512 image in 207 ms, including encoding, a single denoising step, and decoding. If consistency issues can be solved, such speeds could enable real-time AI video filters or experimental video game image generation. In this context, consistency means maintaining the same subject across multiple frames or generations.
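A "video filter" of this kind amounts to running image-to-image conversion on every frame. The sketch below shows what a single per-frame pass could look like with diffusers; the frame source, strength, and prompt are illustrative assumptions, not a real-time pipeline.

```python
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
)
pipe.to("cuda")

# Placeholder for a captured webcam frame; a real filter would loop over frames.
frame = load_image("webcam_frame.png").resize((512, 512))

# With SDXL Turbo, num_inference_steps * strength should be at least 1,
# so 2 steps at strength 0.5 performs one effective denoising step per frame.
styled = pipe(
    prompt="oil painting, impressionist style",
    image=frame,
    num_inference_steps=2,
    strength=0.5,
    guidance_scale=0.0,
).images[0]
styled.save("styled_frame.png")
```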
Currently, SDXL Turbo is provided under a non-commercial research license restricting its use to personal, non-commercial purposes. The move has already received some criticism in the Stable Diffusion community, but Stability AI says it is open to commercial applications and invites interested parties to get in touch for more information.
Meanwhile, Stability AI has faced internal management issues, with one investor recently urging CEO Emad Mostaque to resign. Stability AI management has reportedly been exploring a sale of the company to a larger entity, but this has not slowed the pace of its product releases. Just last week, the company launched Stable Video Diffusion, software that converts still images into short video clips.
Stability AI has provided a beta demo of its SDXL Turbo feature on its image editing platform Clipdrop. You can also try an unofficial live demo for free on Hugging Face. Obviously, all the usual caveats apply, including lack of provenance of training data and potential for misuse. Even with these unanswered questions, technological advances in AI image synthesis are certainly not slowing down.