
Stanford/Google Brain: Two-step distillation speeds up guided diffusion model sampling by up to 256×!


Recently, classifier-free guided diffusion models have been very effective in high-resolution image generation and have been widely used in large-scale diffusion frameworks, including DALL-E 2, GLIDE and Imagen.

However, a drawback of classifier-free guided diffusion models is that they are computationally expensive at inference time, because they require evaluating two diffusion models (a class-conditional model and an unconditional model) hundreds of times.

To address this problem, researchers from Stanford University and Google Brain propose a two-step distillation approach that improves the sampling efficiency of classifier-free guided diffusion models.


Paper address: https://arxiv.org/abs/2210.03142

How is a classifier-free guided diffusion model distilled into a fast sampling model?

First, for a pre-trained classifier-free guided model, the researchers learn a single model to match the combined output of the conditional model and the unconditional model.

The researchers then gradually distilled this model into a diffusion model with fewer sampling steps.

On ImageNet 64×64 and CIFAR-10, this approach generates images that are visually comparable to those of the original model.

With as few as 4 sampling steps, it obtains FID/IS scores comparable to those of the original model, while sampling up to 256 times faster.

[Figure: samples from the distilled model at different guidance weights w]

By varying the guidance weight w, the distilled model can trade off sample diversity against quality, and it achieves visually pleasing results with just one sampling step.

Background on diffusion models

Given samples $x$ from the data distribution $p_{\text{data}}(x)$ and noise-scheduling functions $\alpha_t$ and $\sigma_t$, the researchers train a diffusion model $\hat{x}_\theta$ with parameters $\theta$ by minimizing the weighted mean squared error

$$\mathbb{E}_{t \sim U[0,1],\, x \sim p_{\text{data}},\, z_t \sim q(z_t \mid x)}\big[\,\omega(\lambda_t)\,\lVert \hat{x}_\theta(z_t) - x \rVert_2^2\,\big]$$

where $\lambda_t = \log(\alpha_t^2/\sigma_t^2)$ is the signal-to-noise ratio, $q(z_t \mid x) = \mathcal{N}(z_t;\, \alpha_t x,\, \sigma_t^2 I)$, and $\omega(\lambda_t)$ is a pre-specified weighting function.
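To make the objective concrete, here is a minimal PyTorch sketch of one training step under this weighted MSE. The toy network, the cosine noise schedule, and the truncated-SNR weighting are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    """Stand-in x-prediction network: predicts x from (z_t, t)."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(), nn.Linear(128, dim))

    def forward(self, z_t, t):
        return self.net(torch.cat([z_t, t[:, None]], dim=-1))

def alpha_sigma(t):
    """A cosine noise schedule (one common choice), with alpha_t^2 + sigma_t^2 = 1."""
    return torch.cos(0.5 * torch.pi * t), torch.sin(0.5 * torch.pi * t)

def diffusion_loss(model, x):
    """Weighted MSE: E[ w(lambda_t) * ||x_hat(z_t) - x||^2 ], with truncated-SNR
    weighting w(lambda_t) = max(alpha_t^2 / sigma_t^2, 1) (a common choice)."""
    t = torch.rand(x.shape[0]).clamp(min=1e-4)                       # t ~ U[0, 1]
    alpha, sigma = alpha_sigma(t)
    z_t = alpha[:, None] * x + sigma[:, None] * torch.randn_like(x)  # z_t ~ q(z_t | x)
    weight = torch.clamp((alpha / sigma) ** 2, min=1.0)              # truncated SNR
    return (weight[:, None] * (model(z_t, t) - x) ** 2).mean()

# Minimal usage: one gradient step on random "data".
model = ToyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = diffusion_loss(model, torch.randn(16, 32))
loss.backward()
opt.step()
```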

Once a diffusion model $\hat{x}_\theta(z_t)$ has been trained, a discrete-time DDIM sampler can be used to sample from it.

Specifically, the DDIM sampler starts from $z_1 \sim \mathcal{N}(0, I)$ and applies the update

$$z_s = \alpha_s \hat{x}_\theta(z_t) + \frac{\sigma_s}{\sigma_t}\big(z_t - \alpha_t \hat{x}_\theta(z_t)\big), \qquad s = t - 1/N,$$

where $N$ is the total number of sampling steps. The final sample is then generated as $x = \hat{x}_\theta(z_{1/N})$.
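A minimal sketch of this discrete-time DDIM sampler for an x-prediction model follows; the cosine schedule and the dummy model in the usage line are assumptions, not the paper's code.

```python
import torch

def alpha_sigma(t):
    # Cosine schedule (an assumed choice; any schedule with alpha^2 + sigma^2 = 1 works).
    return torch.cos(0.5 * torch.pi * t), torch.sin(0.5 * torch.pi * t)

@torch.no_grad()
def ddim_sample(model, shape, N):
    """Deterministic DDIM sampling for an x-prediction model x_hat = model(z_t, t):
    z_s = alpha_s * x_hat + (sigma_s / sigma_t) * (z_t - alpha_t * x_hat), s = t - 1/N."""
    z = torch.randn(shape)                                    # z_1 ~ N(0, I)
    for i in range(N, 1, -1):
        t = torch.full((shape[0],), i / N)
        s = torch.full((shape[0],), (i - 1) / N)
        a_t, s_t = alpha_sigma(t)
        a_s, s_s = alpha_sigma(s)
        x_hat = model(z, t)
        z = a_s[:, None] * x_hat + (s_s / s_t)[:, None] * (z - a_t[:, None] * x_hat)
    # Final sample: x = x_hat(z_{1/N}).
    return model(z, torch.full((shape[0],), 1.0 / N))

# Usage with a dummy model that simply echoes its input.
samples = ddim_sample(lambda z, t: z, shape=(4, 8), N=8)
```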

Classifier-free guidance is an effective method that significantly improves the sample quality of conditional diffusion models, and it has been widely used in models including GLIDE, DALL·E 2 and Imagen.

It introduces a guidance weight parameter $w$ that trades off sample quality against diversity. To generate samples, classifier-free guidance evaluates both the conditional diffusion model $\hat{x}_{c,\theta}(z_t)$ and the jointly trained unconditional model $\hat{x}_\theta(z_t)$ at each update step, and uses

$$\hat{x}_\theta^w(z_t) = (1 + w)\,\hat{x}_{c,\theta}(z_t) - w\,\hat{x}_\theta(z_t)$$

as the prediction.
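As a sketch, the guided prediction is a simple combination of the two model outputs. The model signatures here are hypothetical, and the weight-sharing remark in the comment is a common implementation detail rather than something stated above.

```python
import torch

def guided_x_prediction(cond_model, uncond_model, z_t, t, c, w):
    """Classifier-free guided prediction used at each sampling update:
    x_hat_w(z_t) = (1 + w) * x_hat_cond(z_t, c) - w * x_hat_uncond(z_t).
    Note the two network evaluations per update, which is what makes guided
    sampling expensive. In practice the two models often share weights, with
    the unconditional branch fed a null label (an assumed detail)."""
    x_cond = cond_model(z_t, t, c)       # conditional prediction
    x_uncond = uncond_model(z_t, t)      # unconditional prediction
    return (1.0 + w) * x_cond - w * x_uncond

# Example with dummy models:
out = guided_x_prediction(lambda z, t, c: z, lambda z, t: 0.5 * z,
                          torch.randn(2, 8), torch.tensor([0.5, 0.5]), c=None, w=2.0)
```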

Sampling using classifier-free guidance is generally expensive since two diffusion models need to be evaluated for each sampling update.

In order to solve this problem, the researchers used progressive distillation, which is a method to increase the sampling speed of the diffusion model through repeated distillation.

Previously, however, progressive distillation could not be applied directly to guided diffusion models, nor to samplers other than the deterministic DDIM sampler. This paper addresses both problems.

Distilling the classifier-free guided diffusion model

Their approach distills the classifier-free guided diffusion model in two steps, starting from a trained guided teacher model consisting of the conditional model $\hat{x}_{c,\theta}(z_t)$ and the unconditional model $\hat{x}_\theta(z_t)$.

In the first step, the researchers introduce a continuous-time student model $\hat{x}_{\eta_1}(z_t, w)$ with learnable parameters $\eta_1$ and train it to match the output of the teacher model at any time step $t \in [0, 1]$. Given a range of guidance strengths $[w_{\min}, w_{\max}]$ of interest, the student model is optimized with the objective

$$\mathbb{E}_{w \sim U[w_{\min}, w_{\max}],\, t \sim U[0,1],\, x \sim p_{\text{data}}}\big[\,\omega(\lambda_t)\,\lVert \hat{x}_{\eta_1}(z_t, w) - \hat{x}_\theta^w(z_t) \rVert_2^2\,\big]$$

where $\hat{x}_\theta^w(z_t) = (1 + w)\,\hat{x}_{c,\theta}(z_t) - w\,\hat{x}_\theta(z_t)$.
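A rough sketch of this stage-one objective as a training-loss function is given below; the schedule, the truncated-SNR weighting, and the model signatures are assumptions.

```python
import torch

def alpha_sigma(t):
    return torch.cos(0.5 * torch.pi * t), torch.sin(0.5 * torch.pi * t)  # assumed schedule

def stage_one_loss(student, cond_teacher, uncond_teacher, x, c, w_min=0.0, w_max=4.0):
    """Stage one: the w-conditioned student x_hat_eta1(z_t, w) is trained to match
    the guided teacher prediction (1 + w) * x_hat_c(z_t) - w * x_hat(z_t)
    at random t ~ U[0, 1] and random w ~ U[w_min, w_max]."""
    b = x.shape[0]
    t = torch.rand(b).clamp(min=1e-4)
    w = w_min + (w_max - w_min) * torch.rand(b)
    alpha, sigma = alpha_sigma(t)
    z_t = alpha[:, None] * x + sigma[:, None] * torch.randn_like(x)
    with torch.no_grad():                                          # frozen teacher
        target = ((1 + w)[:, None] * cond_teacher(z_t, t, c)
                  - w[:, None] * uncond_teacher(z_t, t))
    pred = student(z_t, t, c, w)                                   # w is an extra student input
    weight = torch.clamp((alpha / sigma) ** 2, min=1.0)[:, None]   # truncated-SNR weighting assumed
    return (weight * (pred - target) ** 2).mean()
```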

To incorporate the guidance weight w, the researchers make the student model w-conditional, with w fed in as an additional input. To better capture this conditioning, they apply a Fourier embedding to w and then incorporate it into the diffusion model backbone in the same way the time step is incorporated in Kingma et al.
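The following sketch shows one plausible form of such a Fourier embedding of w, mirroring a standard sinusoidal timestep embedding; the dimensionality and frequency range are illustrative, not the paper's values.

```python
import math
import torch
import torch.nn as nn

class GuidanceWeightEmbedding(nn.Module):
    """Fourier-feature embedding of the guidance weight w, built the same way a
    sinusoidal timestep embedding usually is (sizes and frequencies are assumed)."""
    def __init__(self, dim=64, max_period=10_000.0):
        super().__init__()
        half = dim // 2
        freqs = torch.exp(-math.log(max_period) * torch.arange(half) / half)
        self.register_buffer("freqs", freqs)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, 4 * dim))

    def forward(self, w):
        # w: (batch,) guidance weights -> (batch, 4 * dim) embedding that can be
        # added to the timestep embedding inside the U-Net backbone.
        args = w[:, None] * self.freqs[None, :]
        return self.mlp(torch.cat([torch.cos(args), torch.sin(args)], dim=-1))

emb = GuidanceWeightEmbedding()(torch.tensor([0.0, 2.0, 4.0]))  # shape (3, 256)
```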

Since initialization plays a key role in performance, the student model is initialized with the same parameters as the conditional teacher model, except for the newly introduced parameters related to w-conditioning.
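A minimal sketch of that initialization, assuming the student and teacher share module names except for the new w-conditioning layers:

```python
import torch.nn as nn

def init_student_from_teacher(student: nn.Module, teacher: nn.Module) -> None:
    """Initialize the student with the teacher's weights; parameters that exist
    only in the student (e.g. the new w-embedding layers) keep their fresh
    initialization. A sketch; real code may need key remapping if names differ."""
    student_state = student.state_dict()
    for name, param in teacher.state_dict().items():
        if name in student_state and student_state[name].shape == param.shape:
            student_state[name] = param.clone()
    student.load_state_dict(student_state)
```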

In the second step, the researchers consider a discrete-time setting and progressively distill the model learned in the first step, $\hat{x}_{\eta_1}(z_t, w)$, into a student model $\hat{x}_{\eta_2}(z_t, w)$ with learnable parameters $\eta_2$ and fewer sampling steps.

Here $N$ denotes the number of sampling steps. Given $w \sim U[w_{\min}, w_{\max}]$ and a time step $t$, the student model is trained so that one of its steps matches the output of two DDIM sampling steps of the teacher model (i.e., from $t/N$ to $t/N - 0.5/N$, and from $t/N - 0.5/N$ to $t/N - 1/N$).

After distilling the 2N teacher steps into N student steps, the N-step student model can be used as the new teacher, and the same process is repeated to distill it into an N/2-step student model. At each stage, the student model is initialized with the parameters of the teacher model.
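The core of each stage-two round is constructing the one-step target from two teacher DDIM steps. A sketch of that target construction (schedule and signatures assumed) follows; the student is then trained to predict this target with the same weighted MSE as before.

```python
import torch

def alpha_sigma(t):
    t = torch.as_tensor(t)
    return torch.cos(0.5 * torch.pi * t), torch.sin(0.5 * torch.pi * t)  # assumed schedule

def ddim_step(model, z, t, s, w):
    """One deterministic DDIM step of the w-conditioned model from time t to s."""
    a_t, s_t = alpha_sigma(t)
    a_s, s_s = alpha_sigma(s)
    x_hat = model(z, t, w)
    return a_s * x_hat + (s_s / s_t) * (z - a_t * x_hat)

@torch.no_grad()
def two_step_teacher_target(teacher, z_t, t, step, w):
    """Stage two: run two teacher DDIM steps of size step/2, then solve for the
    single x-prediction that would reach the same point in one student step.
    This is the usual progressive-distillation target construction."""
    z_mid = ddim_step(teacher, z_t, t, t - 0.5 * step, w)
    z_end = ddim_step(teacher, z_mid, t - 0.5 * step, t - step, w)
    a_t, s_t = alpha_sigma(t)
    a_e, s_e = alpha_sigma(t - step)
    # Solve z_end = a_e * x_target + (s_e / s_t) * (z_t - a_t * x_target) for x_target.
    return (z_end - (s_e / s_t) * z_t) / (a_e - (s_e / s_t) * a_t)
```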

N-step deterministic and stochastic sampling

Once the model $\hat{x}_{\eta_2}(z_t, w)$ is trained, sampling can be performed with the DDIM update rule for any $w \in [w_{\min}, w_{\max}]$. The researchers note that, for the distilled model $\hat{x}_{\eta_2}(z_t, w)$, this sampling process is deterministic given the initialization $z_1 \sim \mathcal{N}(0, I)$.

In addition, N-step stochastic sampling can be performed: apply one deterministic sampling step with twice the original step size (i.e., the same step as the N/2-step deterministic sampler), and then take one stochastic step backwards (i.e., perturb the sample with noise) with the original step size.

Starting from $z_t$, when $t > 1/N$ the following update rule is used:

$$z_{t-1/N} = \frac{\alpha_{t-1/N}}{\alpha_{t-2/N}}\,\hat{z}_{t-2/N} + \sigma_{t-1/N}\sqrt{1 - e^{\lambda_{t-2/N} - \lambda_{t-1/N}}}\;\epsilon$$

where $\epsilon \sim \mathcal{N}(0, I)$ and $\hat{z}_{t-2/N}$ is the result of one deterministic DDIM step of size $2/N$ from $z_t$ using the distilled model $\hat{x}_{\eta_2}(z_t, w)$.

When $t = 1/N$, the deterministic update rule is used to obtain the final sample $x$ from $z_{1/N}$.

It is worth noting that stochastic sampling requires evaluating the model at slightly different time steps than the deterministic sampler, and it requires small modifications to the training algorithm to handle edge cases.
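Putting the description above together, a sketch of the N-step stochastic sampler might look as follows. The schedule and the exact re-noising variance formula are assumptions consistent with the standard forward process, not the paper's code.

```python
import torch

def alpha_sigma(t):
    t = torch.as_tensor(t)
    return torch.cos(0.5 * torch.pi * t), torch.sin(0.5 * torch.pi * t)  # assumed schedule

def ddim_step(model, z, t, s, w):
    a_t, s_t = alpha_sigma(t)
    a_s, s_s = alpha_sigma(s)
    x_hat = model(z, t, w)
    return a_s * x_hat + (s_s / s_t) * (z - a_t * x_hat)

@torch.no_grad()
def stochastic_sample(model, shape, N, w):
    """N-step stochastic sampling: at each step, one deterministic DDIM step of
    size 2/N followed by re-noising back by 1/N; the final step (t = 1/N) is
    purely deterministic. The re-noising variance follows the standard
    forward-process formula (an assumption about the exact form)."""
    z = torch.randn(shape)
    for i in range(N, 1, -1):
        t, u, s = i / N, (i - 1) / N, (i - 2) / N
        z_det = ddim_step(model, z, t, s, w)                  # double-size deterministic step
        a_s, s_s = alpha_sigma(s)
        a_u, s_u = alpha_sigma(u)
        var = (s_u ** 2 - (a_u / a_s) ** 2 * s_s ** 2).clamp(min=0.0)
        z = (a_u / a_s) * z_det + var.sqrt() * torch.randn(shape)   # stochastic step back
    return model(z, 1.0 / N, w)                               # final deterministic prediction
```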

Other distillation methods

An alternative is to apply progressive distillation directly to the guided model: following the structure of the teacher model, the student is distilled directly into a jointly trained conditional and unconditional model. The researchers tried this and found it to be ineffective.

Experiments and Conclusions

Experiments were conducted on two standard datasets: ImageNet (64×64) and CIFAR-10.

Different ranges of the guidance weight w were explored and found to perform comparably, so [w_min, w_max] = [0, 4] was used in the experiments. The step-one and step-two student models are trained using a signal-to-noise (SNR) loss.

Baselines include DDPM ancestral sampling and DDIM sampling.

To better understand how to incorporate the guidance weight w, a model trained with a fixed w value is used as a reference.

For a fair comparison, the same pre-trained teacher model is used for all methods. The baseline uses the U-Net (Ronneberger et al., 2015) architecture, and the two-step student models use the same U-Net backbone with the additional w-embedding introduced above.

[Figure: performance of all methods on ImageNet 64×64]

The figure above shows the performance of all methods on ImageNet 64×64, where D and S denote the deterministic and stochastic samplers, respectively.

In the experiments, a model trained conditioned on the guidance interval w ∈ [0, 4] performs comparably to models trained with fixed values of w. With few sampling steps, the distilled model significantly outperforms the DDIM baseline, and it essentially matches the teacher model's performance at 8 to 16 steps.

[Figure: ImageNet 64×64 sampling quality evaluated by FID and IS scores]

[Figure: CIFAR-10 sampling quality evaluated by FID and IS scores]

The researchers also distill the encoding process of the teacher model and run style-transfer experiments. Specifically, to perform style transfer between two domains A and B, an image from domain A is encoded with a diffusion model trained on domain A and then decoded with a diffusion model trained on domain B.
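A sketch of that encode-then-decode procedure, using deterministic DDIM updates run forward and then backward in time; the schedule, the model signatures, and the simplified encoder start are assumptions.

```python
import torch

def alpha_sigma(t):
    t = torch.as_tensor(t)
    return torch.cos(0.5 * torch.pi * t), torch.sin(0.5 * torch.pi * t)  # assumed schedule

def ddim_move(model, z, t, s, w):
    """One deterministic DDIM update from time t to time s (s may be above or below t)."""
    a_t, s_t = alpha_sigma(t)
    a_s, s_s = alpha_sigma(s)
    x_hat = model(z, t, w)
    return a_s * x_hat + (s_s / s_t) * (z - a_t * x_hat)

@torch.no_grad()
def style_transfer(model_A, model_B, x_A, N, w):
    """Encode a domain-A image to a latent by running the DDIM updates forward in
    time with model_A, then decode the latent with model_B. Starting the encoder
    at t = 1/N with z = alpha_{1/N} * x_A is a simplification."""
    a0, _ = alpha_sigma(1.0 / N)
    z = a0 * x_A
    for i in range(1, N):                                      # encode: 1/N -> 1
        z = ddim_move(model_A, z, i / N, (i + 1) / N, w)
    for i in range(N, 1, -1):                                  # decode: 1 -> 1/N
        z = ddim_move(model_B, z, i / N, (i - 1) / N, w)
    return model_B(z, 1.0 / N, w)                              # decoded image in domain B
```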

[Figure: style-transfer results comparing the distilled encoder/decoder with the DDIM encoder/decoder]

Since the encoding process can be understood as a reversed DDIM sampling process, the researchers distill both the encoder and the decoder with classifier-free guidance and compare them with the DDIM encoder and decoder, as shown in the figure above. They also explore how changing the guidance strength w affects performance.

In summary, the paper proposes a distillation approach for guided diffusion models, along with a stochastic sampler for sampling from the distilled models. Empirically, the method achieves visually appealing samples in as little as one sampling step, and obtains FID/IS scores comparable to those of the teacher in only 8 to 16 steps.
