Table of Contents
Generative models
Variational diffusion models
Three equivalent interpretations
Score-based generative models

Is the mathematics behind the diffusion model too difficult to digest? Google makes it clear with a unified perspective

Apr 11, 2023, 7:46 PM

AI painting has been hugely popular recently.

While you marvel at AI's painting ability, what you may not know is that diffusion models play a big role in it. Take OpenAI's popular DALL·E 2 as an example: given just a simple text prompt, it can generate multiple 1024×1024 high-definition images.

Not long after DALL·E 2 was announced, Google released Imagen, a text-to-image AI model that can generate realistic images of a scene from a given text description.

Just a few days ago, Stability AI publicly released the latest version of its text-to-image model, Stable Diffusion, whose generated images have reached commercial quality.

Since DDPM was proposed in 2020, diffusion models have gradually become a new hot spot in the field of generative modeling. Later, OpenAI launched models such as GLIDE and ADM-G, which made diffusion models even more popular.

Many researchers believe that diffusion-based text-to-image models not only use fewer parameters but also generate higher-quality images, and have the potential to replace GANs.

However, the mathematics behind diffusion models has discouraged many researchers, who consider it much harder to understand than that of VAEs and GANs.

Recently, researchers from Google Research wrote "Understanding Diffusion Models: A Unified Perspective". The article presents the mathematical principles behind diffusion models in extreme detail, so that other researchers can follow along and learn what diffusion models are and how they work.

Paper address: https://arxiv.org/pdf/2208.11970.pdf

As for how "mathematical" the paper is, the authors describe it like this: "We demonstrate the mathematics behind these models in excruciating detail."

The paper is divided into six parts, mainly covering generative models; the ELBO, VAEs, and hierarchical VAEs; variational diffusion models; and score-based generative models.


The following is an excerpt from the paper:

Generative models

Given observed samples x from a distribution of interest, the goal of a generative model is to learn to model its true data distribution p(x). Once the model is learned, we can generate new samples from it; in some formulations, we can also use the learned model to evaluate the likelihood of observed or sampled data.

There are several important directions in the current research literature; this article introduces them only briefly, at a high level:

  • Generative adversarial networks (GANs), which model the sampling process of a complex distribution and are learned adversarially;
  • "Likelihood-based" generative models, which seek to assign high likelihood to observed data samples and usually include autoregressive models, normalizing flows, and VAEs;
  • Energy-based models, in which the distribution is learned as an arbitrarily flexible energy function and then normalized;
  • Score-based generative models, in which, instead of learning to model the energy function itself, the score of the energy-based model is learned as a neural network.

In this work, the paper explores and reviews diffusion models, which, as it shows, have both likelihood-based and score-based interpretations.

Variational diffusion models

In the simplest view, a Variational Diffusion Model (VDM) can be considered a Markovian hierarchical variational autoencoder (MHVAE) with three key restrictions (or assumptions):

  • The latent dimension is exactly equal to the data dimension;
  • The structure of the latent encoder at each timestep is not learned; it is pre-defined as a linear Gaussian model, i.e., a Gaussian distribution centered on the output of the previous timestep;
  • The Gaussian parameters of the latent encoder vary over time in such a way that the latent distribution at the final timestep T is a standard Gaussian.


Visual representation of the variational diffusion model

In addition, the VDM explicitly retains the Markov property between hierarchical transitions from the standard Markovian hierarchical variational autoencoder. The implications of the three main assumptions above are expanded one by one below.

Starting from the first assumption, with a slight abuse of notation, real data samples and latent variables can now be jointly represented as x_t, where t = 0 denotes the real data sample and t ∈ [1, T] denotes the corresponding latent variable, with the hierarchy indexed by t. The VDM posterior is the same as the MHVAE posterior, but can now be rewritten as:

q(x_{1:T} | x_0) = ∏_{t=1}^T q(x_t | x_{t−1})

From the second assumption, the distribution of each latent variable in the encoder is a Gaussian centered on its previous hierarchical latent. Unlike in an MHVAE, the structure of the encoder at each timestep is not learned; it is fixed as a linear Gaussian model, whose mean and standard deviation can be set in advance as hyperparameters or learned as parameters. Mathematically, the encoder transitions are expressed as:

q(x_t | x_{t−1}) = N(x_t; √α_t · x_{t−1}, (1 − α_t) · I)
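As a concrete sketch of one encoder step (the variable names and the α_t value below are illustrative, not from the paper):

```python
import numpy as np

def forward_step(x_prev, alpha_t, rng):
    """Sample x_t ~ q(x_t | x_{t-1}) = N(sqrt(alpha_t) * x_{t-1}, (1 - alpha_t) * I)."""
    noise = rng.standard_normal(x_prev.shape)
    return np.sqrt(alpha_t) * x_prev + np.sqrt(1.0 - alpha_t) * noise

rng = np.random.default_rng(0)
x0 = np.ones(4)                             # a toy 4-pixel "image"
x1 = forward_step(x0, alpha_t=0.98, rng=rng)
```

Each step slightly scales the previous latent toward zero and mixes in fresh Gaussian noise, so repeating it drives x_t toward a standard Gaussian.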

For the third assumption, α_t evolves over time according to a fixed or learnable schedule, such that the distribution of the final latent variable p(x_T) is a standard Gaussian. The MHVAE joint distribution can then be updated, and the VDM joint distribution can be written as:

p(x_{0:T}) = p(x_T) · ∏_{t=1}^T p_θ(x_{t−1} | x_t)

Collectively, this set of assumptions describes the steady noising of an image as it evolves over time: the image is progressively corrupted by adding Gaussian noise until it eventually becomes pure Gaussian noise.

As with any HVAE, the VDM can be optimized by maximizing the Evidence Lower Bound (ELBO), which can be derived as follows:

ELBO = E_q[log p_θ(x_0 | x_1)] − D_KL(q(x_T | x_0) ‖ p(x_T)) − Σ_{t=2}^T E_q[D_KL(q(x_{t−1} | x_t, x_0) ‖ p_θ(x_{t−1} | x_t))]

where the first term is the reconstruction term, the second is the prior matching term, and the summation collects the denoising matching terms.

The interpretation of each ELBO term is shown in Figure 4 below:

[Figure 4]

Three equivalent interpretations

As shown above, a variational diffusion model can be trained simply by learning a neural network to predict the original natural image x_0 from an arbitrarily noised version x_t and its time index t. However, x_0 has two other equivalent parameterizations, which lead to two further interpretations of the VDM.

First, the reparameterization trick can be used. In the derivation of the form of q(x_t | x_0), Equation 69 can be rearranged as:

x_0 = (x_t − √(1 − ᾱ_t) · ε_0) / √ᾱ_t
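Because a composition of linear Gaussian steps is itself Gaussian, x_t can also be sampled from x_0 in a single shot via x_t = √ᾱ_t·x_0 + √(1 − ᾱ_t)·ε, where ᾱ_t = ∏_{i=1}^t α_i. A toy sketch (schedule and names chosen arbitrarily for illustration):

```python
import numpy as np

def diffuse_to_t(x0, alphas, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    abar_t = np.prod(alphas[:t])            # abar_t = alpha_1 * ... * alpha_t
    eps = rng.standard_normal(x0.shape)     # eps ~ N(0, I)
    return np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * eps

alphas = np.linspace(0.99, 0.95, 10)        # an arbitrary illustrative schedule
rng = np.random.default_rng(0)
x0 = np.ones(4)
x5 = diffuse_to_t(x0, alphas, t=5, rng=rng)
```

Since each α_i < 1, ᾱ_t shrinks as t grows, so the signal fades and the noise variance 1 − ᾱ_t grows toward 1.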

Substituting this into the previously derived true denoising transition mean µ_q(x_t, x_0), it can be re-derived as:

µ_q(x_t, x_0) = (1/√α_t) · x_t − (1 − α_t)/(√(1 − ᾱ_t) · √α_t) · ε_0

The approximate denoising transition mean µ_θ(x_t, t) can therefore be set as:

µ_θ(x_t, t) = (1/√α_t) · x_t − (1 − α_t)/(√(1 − ᾱ_t) · √α_t) · ε̂_θ(x_t, t)

and the corresponding optimization problem becomes:

argmin_θ (1/(2σ_q²(t))) · ((1 − α_t)² / ((1 − ᾱ_t) · α_t)) · ‖ε_0 − ε̂_θ(x_t, t)‖²₂
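In other words, the network ε̂_θ is trained to predict the source noise ε_0 under a time-dependent weight. A minimal sketch of the unweighted version of this loss (the "simple" objective commonly used in practice, e.g. in DDPM; the zero predictor below is a stand-in for an untrained network):

```python
import numpy as np

def noise_prediction_loss(eps_hat, eps):
    """Simplified VDM objective: mean squared error between predicted and true
    noise. The full objective carries the time-dependent weight shown above,
    which is often dropped in practice."""
    return np.mean((eps_hat - eps) ** 2)

rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)                 # toy data sample
abar_t = 0.5                                # illustrative value of abar_t
eps = rng.standard_normal(4)                # true source noise eps_0
x_t = np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * eps
eps_hat = np.zeros_like(eps)                # an untrained "network" predicting zero
loss = noise_prediction_loss(eps_hat, eps)
```

A trained network would take (x_t, t) as input; here the prediction is hard-coded purely to exercise the loss.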

To derive the third common interpretation of variational diffusion models, we turn to Tweedie's formula, which states that the true mean of an exponential family distribution, given samples drawn from it, can be estimated by the maximum likelihood estimate of the samples (i.e., the empirical mean) plus a correction term involving the score of the estimate.

Mathematically, for a Gaussian variable z ∼ N(z; µ_z, Σ_z), Tweedie's formula states:

E[µ_z | z] = z + Σ_z · ∇_z log p(z)
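This can be checked numerically for a one-dimensional Gaussian: since ∇_z log p(z) = −(z − µ_z)/σ_z² for z ∼ N(µ_z, σ_z²), the correction z + σ_z²·∇_z log p(z) recovers the mean exactly (the numbers below are arbitrary):

```python
mu_z, sigma2_z = 3.0, 4.0                    # true mean and variance

def score(z):
    """Score of N(mu_z, sigma2_z): gradient of the log density w.r.t. z."""
    return -(z - mu_z) / sigma2_z

z = 7.5                                      # any observed sample
tweedie_estimate = z + sigma2_z * score(z)   # Tweedie correction
# tweedie_estimate == 3.0, the true mean
```

The noisy observation plus a score-weighted correction lands back on the mean, which is exactly how the score term enters the VDM derivation.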

Score-based generative models

The researchers showed that a variational diffusion model can be learned simply by optimizing a neural network s_θ(x_t, t) to predict the score function ∇ log p(x_t). However, the score term in this derivation comes from applying Tweedie's formula, which does not necessarily provide good intuition or insight into what the score function actually is or why it is worth modeling.

Fortunately, this intuition can be obtained with the help of another class of generative models, namely score-based generative models. The researchers demonstrated that the VDM formulation derived earlier has an equivalent score-based generative modeling formulation, which allows flexible switching between the two interpretations.

To understand why optimizing a score function makes sense, the researchers revisit energy-based models, in which an arbitrarily flexible probability distribution can be written as:

p_θ(x) = (1/Z_θ) · exp(−f_θ(x)), where Z_θ is the normalization constant

One way to avoid computing or modeling the normalization constant is to use a neural network s_θ(x) to learn the score function ∇ log p(x) of the distribution p(x) directly. This is motivated by observing that taking the derivative of the log of both sides of Equation 152 gives:

∇_x log p_θ(x) = −∇_x f_θ(x)

The score can be freely represented as a neural network without involving any normalization constant, and the score model can be optimized by minimizing the Fisher divergence with the true score function:

E_{p(x)}[ ‖s_θ(x) − ∇ log p(x)‖²₂ ]
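A small numerical check of this point (toy energy chosen for illustration): for the energy f(x) = (x − 2)²/2, the distribution p(x) ∝ exp(−f(x)) is N(2, 1), and −∇f(x) matches its score exactly, even though the normalization constant is never computed:

```python
def energy(x):
    """f(x); p(x) is proportional to exp(-f(x)), i.e. N(2, 1) up to Z."""
    return 0.5 * (x - 2.0) ** 2

def score_from_energy(x, h=1e-5):
    """score = grad log p(x) = -grad f(x), via central finite differences;
    the normalization constant Z cancels out entirely."""
    return -(energy(x + h) - energy(x - h)) / (2 * h)

x = 3.7
analytic_score = -(x - 2.0)                  # exact score of N(2, 1) at x
approx = score_from_energy(x)
# approx ≈ analytic_score == -1.7
```

Here finite differences stand in for the neural network s_θ(x); the key point is that only the unnormalized energy is ever touched.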

Intuitively, the score function defines a vector field over the entire space in which the data x lives, pointing toward its modes, as shown in Figure 6 below.

[Figure 6]

Finally, the researchers establish an explicit relationship between variational diffusion models and score-based generative models.

Please refer to the original paper for more details.

