
The fastest model at 1024 resolution: ByteDance releases the open text-to-image model SDXL-Lightning


Model: https://www.php.cn/link/36ef259d4d9967f3a81aa326160128c7

Paper: https://www.php.cn/link/ca0525bfe5cab4c577d169d3343a5452



1. Lightning-fast image generation

Generative AI is gaining global attention for its ability to create stunning images and even videos based on text prompts. Current state-of-the-art generative models rely on diffusion, an iterative process that gradually transforms noise into image samples. This process requires huge computing resources and is slow.

Generating a high-quality image sample takes about 5 seconds per image and typically requires 20 to 40 calls to a huge neural network. This speed limits application scenarios that require fast, real-time generation. How to speed up generation while preserving quality is a hot area of current research and the core goal of our work.

SDXL-Lightning breaks through this barrier with an innovative technique, progressive adversarial distillation, to achieve unprecedented generation speed. The model can generate images of very high quality and resolution in just 2 or 4 steps, reducing computational cost and time by a factor of ten. For time-sensitive applications, our method can even generate images in a single step, albeit with a slight sacrifice in quality.

SDXL-Lightning not only offers a speed advantage but also excels in image quality, outperforming previous acceleration techniques in our evaluations. It achieves higher resolution and richer detail while maintaining good diversity and image-text alignment.

Figure: Speed comparison, original model (20 steps) vs. SDXL-Lightning model (2 steps)


2. Model results

The SDXL-Lightning model can generate images in 1, 2, 4, or 8 steps. The more inference steps, the higher the image quality.
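For readers who want to try these step counts, below is a minimal inference sketch using the Hugging Face diffusers library. The repository name ByteDance/SDXL-Lightning and the checkpoint file sdxl_lightning_4step_unet.safetensors are taken from the public model release and are assumptions not stated in this article; swap in the 1-, 2-, or 8-step checkpoint together with the matching num_inference_steps to trade speed for quality.

```python
# Minimal 4-step SDXL-Lightning inference sketch with diffusers (assumed checkpoint names).
import torch
from diffusers import StableDiffusionXLPipeline, UNet2DConditionModel, EulerDiscreteScheduler
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

base = "stabilityai/stable-diffusion-xl-base-1.0"   # original SDXL base model
repo = "ByteDance/SDXL-Lightning"                   # assumed Hugging Face repo name
ckpt = "sdxl_lightning_4step_unet.safetensors"      # assumed 4-step distilled UNet file

# Load the distilled UNet weights into an SDXL-shaped UNet.
unet = UNet2DConditionModel.from_config(base, subfolder="unet").to("cuda", torch.float16)
unet.load_state_dict(load_file(hf_hub_download(repo, ckpt), device="cuda"))

# Build the SDXL pipeline around the distilled UNet.
pipe = StableDiffusionXLPipeline.from_pretrained(
    base, unet=unet, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# The distilled model expects "trailing" timestep spacing and no classifier-free guidance.
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

image = pipe("A girl smiling", num_inference_steps=4, guidance_scale=0).images[0]
image.save("lightning_4step.png")
```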

The following are 4-step generation results:


A girl smiling

A pickup truck going up a mountain switchback

A fish on a bicycle, colorful art

A close-up of an Asian lady with sunglasses

A beautiful cup

Mona Lisa, sketch

A panda swimming

House in the desert, surreal landscapes

The following are 2-step generation results:

Furniture design for a living room

A cinematic shot of a baby raccoon wearing an intricate Italian priest robe

A dog with soft fur and bright eyes jumping after a toy, in a cozy living room

A tea cup containing clouds

A family, medium shot

Baby playing with toys in the snow

An old man and a dog are walking in the park

Dragon driving a car

A monkey making latte art

Compared to previous methods (Turbo and LCM), our method generates images with significantly better detail and remains more faithful to the style and layout of the original generative model.



3. Giving back to the community: an open model

The wave of open source has become a key force driving the rapid development of artificial intelligence, and ByteDance is proud to be part of this wave. Our model is built on SDXL, currently the most popular open text-to-image model, which already has a thriving ecosystem. We have now decided to open SDXL-Lightning to developers, researchers, and creative practitioners around the world so that they can access and apply this model, further driving innovation and collaboration across the industry.

When designing SDXL-Lightning, we considered compatibility with the open model community. Many artists and developers in the community have created a wide variety of stylized image-generation models, such as cartoon and anime styles. To support these models, we also provide SDXL-Lightning as a speed-up plug-in that can be seamlessly integrated into these stylized SDXL models to accelerate their image generation.
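As a rough illustration of how such a speed-up plug-in can be attached to a stylized SDXL checkpoint, here is a sketch using diffusers' LoRA loading API. The LoRA file name sdxl_lightning_4step_lora.safetensors and the repository name are assumptions based on the public release, and the base model shown is only a placeholder for any community SDXL checkpoint.

```python
# Sketch: applying the SDXL-Lightning speed-up plug-in to a stylized SDXL model.
# The LoRA file name and repo are assumptions based on the public model release.
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
from huggingface_hub import hf_hub_download

stylized_base = "stabilityai/stable-diffusion-xl-base-1.0"  # placeholder: any community SDXL checkpoint
repo = "ByteDance/SDXL-Lightning"                           # assumed repo name
lora = "sdxl_lightning_4step_lora.safetensors"              # assumed 4-step LoRA file

pipe = StableDiffusionXLPipeline.from_pretrained(
    stylized_base, torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Load and fuse the acceleration LoRA into the stylized model's weights.
pipe.load_lora_weights(hf_hub_download(repo, lora))
pipe.fuse_lora()

pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

image = pipe("An anime-style girl smiling", num_inference_steps=4, guidance_scale=0).images[0]
```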


The SDXL-Lightning model can also be combined with ControlNet, a very popular control plug-in, to achieve extremely fast and controllable image generation.
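The sketch below shows one way to combine the Lightning UNet with an SDXL ControlNet in diffusers; the canny ControlNet checkpoint, the edge-map file, and the Lightning file names are illustrative assumptions, not details from this article.

```python
# Sketch: combining the SDXL-Lightning UNet with an SDXL ControlNet (canny) for fast,
# controllable generation. Checkpoint names below are illustrative assumptions.
import torch
from diffusers import (
    StableDiffusionXLControlNetPipeline,
    ControlNetModel,
    UNet2DConditionModel,
    EulerDiscreteScheduler,
)
from diffusers.utils import load_image
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

base = "stabilityai/stable-diffusion-xl-base-1.0"
repo = "ByteDance/SDXL-Lightning"                   # assumed repo name
ckpt = "sdxl_lightning_4step_unet.safetensors"      # assumed 4-step checkpoint

# Load the Lightning-distilled UNet as in the earlier sketch.
unet = UNet2DConditionModel.from_config(base, subfolder="unet").to("cuda", torch.float16)
unet.load_state_dict(load_file(hf_hub_download(repo, ckpt), device="cuda"))

# Any SDXL ControlNet works; a canny-edge ControlNet is used here for illustration.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)

pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    base, unet=unet, controlnet=controlnet, torch_dtype=torch.float16, variant="fp16"
).to("cuda")
pipe.scheduler = EulerDiscreteScheduler.from_config(
    pipe.scheduler.config, timestep_spacing="trailing"
)

edges = load_image("edges.png")  # a precomputed canny edge map used as the control signal
image = pipe(
    "House in the desert, surreal landscapes",
    image=edges,
    num_inference_steps=4,
    guidance_scale=0,
).images[0]
```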


The SDXL-Lightning model also supports ComfyUI, the most popular generation tool in the open-source community, where the model can be loaded directly for use.



4. Technical details

In theory, image generation is a gradual transformation from noise to a clear image. In this process, the neural network learns the gradient at each position along the transformation flow.

The specific steps to generate an image are as follows:

First, we randomly sample a noise sample at the starting point of the flow and use the neural network to compute the gradient. Based on the gradient at the current position, we make a small adjustment to the sample, then repeat the process. With each iteration, the sample moves closer to the final image distribution, until a clear image is obtained.
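The loop below is a schematic sketch of this iterative procedure with a placeholder network, not the actual SDXL implementation; it only illustrates the repeated pattern of estimating the local gradient and taking a small step.

```python
# Schematic sketch of the iterative sampling loop described above (placeholder network):
# start from noise and take many small steps along the learned flow toward the image.
import torch

def sample(model, shape, num_steps=40, device="cuda"):
    x = torch.randn(shape, device=device)                # noise sample at the start of the flow
    timesteps = torch.linspace(1.0, 0.0, num_steps + 1, device=device)
    for i in range(num_steps):
        t, t_next = timesteps[i], timesteps[i + 1]
        velocity = model(x, t)                           # network estimates the local gradient of the flow
        x = x + (t_next - t) * velocity                  # small Euler step along the flow
    return x                                             # approximately a clean image sample
```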

Figure: The generation flow (picture from: https://www.php.cn/link/5c9b5c47258cf1499c2dc64b7072e735)

Because the generation flow is complex and non-linear, the generation process can only take a small step at a time to limit the accumulation of gradient errors. This requires frequent evaluations of the neural network, which is why the computation is so expensive.

Figure: The curved generation flow (picture from: https://www.php.cn/link/d7bbb6396ce5daf19ec6cf4bb4453137)

To reduce the number of steps required to generate an image, many studies have sought solutions. Some propose sampling methods that reduce the error, while others try to make the generation flow more linear. Although these methods have made progress, they still need more than 10 inference steps to generate an image.

Another approach is model distillation, which can generate high-quality images in fewer than 10 inference steps. Instead of computing the gradient at the current flow position, model distillation changes the model's prediction target so that it directly predicts the next, farther flow position. Specifically, we train a student network to directly predict the result the teacher network reaches after multiple inference steps. This strategy significantly reduces the number of inference steps required, and by applying the process repeatedly we can reduce the step count further. Prior research calls this approach progressive distillation.
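The following is a simplified sketch of one progressive-distillation training step with placeholder teacher and student networks; the substep count, the Euler integration, and the regression loss are illustrative choices, not the exact recipe used for SDXL-Lightning.

```python
# Schematic sketch of one progressive-distillation training step (placeholder networks):
# the student learns to jump in one step to where the teacher lands after several steps.
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, x_t, t, t_next, teacher_substeps=4):
    # Teacher: integrate several small steps from t to t_next.
    with torch.no_grad():
        x = x_t
        ts = torch.linspace(float(t), float(t_next), teacher_substeps + 1)
        for i in range(teacher_substeps):
            x = x + (ts[i + 1] - ts[i]) * teacher(x, ts[i])
        teacher_target = x

    # Student: predict the same destination in a single step.
    student_pred = x_t + (t_next - t) * student(x_t, t)

    # Plain progressive distillation uses a regression loss; the adversarial variant
    # described below replaces this with a distribution-matching objective.
    return F.mse_loss(student_pred, teacher_target)
```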

Figure: Progressive distillation; the student network predicts the result of the teacher network after multiple steps


In practice, the student network often has difficulty accurately predicting future flow positions. The error is amplified as it accumulates step by step, and below 8 inference steps the images produced by the model start to become blurry.

To solve this problem, our strategy is not to force the student network to exactly match the teacher network's predictions, but to keep the student network consistent with the teacher network in terms of probability distribution. In other words, the student network is trained to predict a probabilistically plausible position, and even if that position is not completely accurate, it is not penalized. This goal is achieved through adversarial training, which introduces an additional discriminator network to help match the output distributions of the student and teacher networks.
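A highly simplified sketch of this adversarial objective is shown below; the discriminator is left abstract and the non-saturating GAN loss is an illustrative choice rather than the paper's exact formulation.

```python
# Simplified sketch of the adversarial objective: a discriminator D is trained to tell
# teacher outputs from student outputs, and the student is trained to fool it, so the
# student only needs to land somewhere the teacher's output distribution supports.
# Placeholder networks; not the exact formulation in the paper.
import torch
import torch.nn.functional as F

def discriminator_loss(D, teacher_out, student_out, t):
    real = D(teacher_out.detach(), t)                     # teacher samples labeled "real"
    fake = D(student_out.detach(), t)                     # student samples labeled "fake"
    return (F.softplus(-real) + F.softplus(fake)).mean()  # non-saturating GAN loss

def student_adversarial_loss(D, student_out, t):
    return F.softplus(-D(student_out, t)).mean()          # student tries to be classified as "real"
```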

This is a brief overview of our method. In the technical paper (https://www.php.cn/link/ca0525bfe5cab4c577d169d3343a5452), we provide a more in-depth theoretical analysis, the training strategy, and the specific formulation details of the model.

5. Beyond SDXL-Lightning

Although this study mainly explores how to use SDXL-Lightning for image generation, the application potential of the proposed progressive adversarial distillation method is not limited to static images. This technique can also be used to generate video, audio, and other multi-modal content quickly and with high quality. We sincerely invite you to experience SDXL-Lightning on the Hugging Face platform and look forward to your valuable comments and feedback.

Model: https://www.php.cn/link/36ef259d4d9967f3a81aa326160128c7

Paper: https://www.php.cn/link/ca0525bfe5cab4c577d169d3343a5452
