Wasserstein distance, also known as Earth Mover's Distance, is a mathematical method for measuring the distance between two probability distributions. Unlike pointwise measures such as Euclidean distance, it accounts for both the similarity of the distributions and the geometry of the underlying space, which makes it well suited to comparing high-dimensional data sets. The Wasserstein distance is defined as the minimum total cost required to transform one distribution into the other, where the cost can be interpreted as the effort needed to move probability mass from one location to another. The Wasserstein distance can therefore be viewed as the cost of an optimal mass transfer between the two distributions, and it is widely used in many fields, including image processing, natural language processing, and economics.
The definition of the Wasserstein distance is based on minimizing the cost required to transform one distribution into the other. In principle this cost can be arbitrary, but it usually refers to the cost of moving mass from one location to another, expressed as the product of the distance between the two locations and the amount of mass moved. The Wasserstein distance is the minimum of this total cost over all possible transport plans.
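The "distance × mass" idea can be checked on a tiny discrete example. The sketch below (my own illustration, not from the original article) uses SciPy's `wasserstein_distance`, which accepts sample positions together with optional `u_weights`/`v_weights`:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Discrete distribution u: 0.6 of the mass at x=0, 0.4 of the mass at x=1
# Discrete distribution v: all of the mass at x=1
u_positions, u_weights = [0.0, 1.0], [0.6, 0.4]
v_positions, v_weights = [1.0], [1.0]

# The optimal plan moves the 0.6 mass at x=0 a distance of 1,
# so the total cost is distance * mass = 1 * 0.6 = 0.6
w = wasserstein_distance(u_positions, v_positions,
                         u_weights=u_weights, v_weights=v_weights)
print(w)  # -> 0.6
```

The 0.4 of mass already sitting at x=1 does not move, so only the transported mass contributes to the cost.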
Mathematically, the Wasserstein distance can be defined as:
W_p(\mu,\nu)=\left(\inf_{\gamma\in\Gamma(\mu,\nu)}\int_{\mathbb{R}^d \times \mathbb{R}^d} \|x-y\|^p \, d\gamma(x,y)\right)^{1/p}
Here, \mu and \nu are two probability distributions, and \Gamma(\mu,\nu) is the set of all joint distributions (couplings) \gamma whose marginals are \mu and \nu; \gamma(x,y) represents how much mass is transported from x to y. The exponent p \geq 1 is a constant, usually p=1 or p=2. When p=1, the Wasserstein distance is also called the Earth Mover's Distance, because it can be viewed as the minimum amount of work required to move one distribution onto the other.
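In one dimension the infimum has a closed form via quantile functions: W_p(\mu,\nu)^p = \int_0^1 |F^{-1}(q)-G^{-1}(q)|^p dq, where F and G are the CDFs of \mu and \nu. For two samples of equal size this reduces to matching sorted values. A minimal sketch (the helper name `wp_distance` is my own, not a library function):

```python
import numpy as np

def wp_distance(x, y, p=1):
    """1-D p-Wasserstein distance between two equal-size samples.

    Uses the closed form on the real line: match the sorted values
    (the empirical quantile functions) and average the p-th powers
    of their gaps.
    """
    x, y = np.sort(x), np.sort(y)
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
a = rng.uniform(0.0, 1.0, 10_000)   # samples from U[0, 1]
b = rng.uniform(0.5, 1.5, 10_000)   # samples from U[0.5, 1.5]

# Both should be close to 0.5: the optimal plan shifts every point
# by 0.5, and |0.5|^p averaged and then p-th-rooted is still 0.5.
print(wp_distance(a, b, p=1))
print(wp_distance(a, b, p=2))
```

Because the second distribution is just a shift of the first, W_1 and W_2 coincide here; for distributions that differ in shape, larger p penalizes long-distance transport more heavily.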
To better understand the concept of Wasserstein distance, consider a simple example: suppose we have two one-dimensional probability distributions P and Q, uniformly distributed on the intervals [0, 1] and [0.5, 1.5] respectively. We can use Python and the SciPy library to calculate the Wasserstein distance between them.
```python
import numpy as np
from scipy.stats import wasserstein_distance

# scipy's wasserstein_distance takes sample values (optionally with
# weights), not probability vectors, so we represent each distribution
# by 100 evenly spaced points on its support
P = np.linspace(0.0, 1.0, 100)   # uniform on [0, 1]
Q = np.linspace(0.5, 1.5, 100)   # uniform on [0.5, 1.5]

# Compute the Wasserstein distance between them
w_dist = wasserstein_distance(P, Q)
print("Wasserstein distance:", round(w_dist, 6))  # round away float noise
```

In this example, we use NumPy's `linspace` to generate 100 evenly spaced sample values from each distribution: P on the interval [0, 1] and Q on [0.5, 1.5]. We then compute the Wasserstein distance between them using the `wasserstein_distance` function from the SciPy library. After running the code, we get the output:
Wasserstein distance: 0.5
This means that the minimum cost required to transform distribution P into distribution Q is 0.5. Intuitively, the optimal plan shifts every unit of mass 0.5 to the right, like pushing a pile of earth 0.5 units along the line, so the total cost is 1 (total mass) × 0.5 (distance) = 0.5.
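For p=1 on the real line there is another well-known identity: the Wasserstein distance equals the area between the two CDFs. A quick numerical check of this (my own sketch, using the exact CDFs of the two uniform distributions above and a simple Riemann sum):

```python
import numpy as np

# CDFs of U[0, 1] and U[0.5, 1.5] evaluated on a common grid
x = np.linspace(0.0, 1.5, 100_001)
F_P = np.clip(x, 0.0, 1.0)        # CDF of U[0, 1]
F_Q = np.clip(x - 0.5, 0.0, 1.0)  # CDF of U[0.5, 1.5]

# W_1 equals the area between the CDFs: integrate |F_P - F_Q|
w1 = np.sum(np.abs(F_P - F_Q)[:-1] * np.diff(x))
print(w1)  # approximately 0.5
```

The area decomposes as a triangle (0.125), a rectangle (0.25), and another triangle (0.125), which sum to the 0.5 reported by SciPy.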
In short, the Wasserstein distance is a method for measuring the distance between two probability distributions that takes into account both their similarity and the geometry of the underlying space. It has many applications, such as the loss function in Wasserstein generative adversarial networks (WGANs) and similarity measures in image retrieval.
The above is the detailed content of Wasserstein distance. For more information, please follow other related articles on the PHP Chinese website!