Principal component analysis example in Python

王林
Release: 2023-06-10 08:19:53

Principal Component Analysis (PCA) is a method commonly used for dimensionality reduction. It projects high-dimensional data onto a lower-dimensional space while retaining as much of the data's variation as possible. Python provides many libraries and tools for implementing PCA. This article uses an example to show how to implement PCA with the sklearn library in Python.

First, we need to prepare a data set. This article uses the Iris data set, which contains 150 samples. Each sample has 4 feature values (the length and width of the sepals, and the length and width of the petals) and a label (the species of iris). Our goal is to reduce the dimensionality of these four features and find the most important principal components.

We begin by importing the necessary libraries and the data set.

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

iris = load_iris()
X = iris.data
y = iris.target

Now we can create a PCA object and apply it.

pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)

The PCA object here sets n_components=2, which means we want to project the data onto a two-dimensional plane. We apply fit_transform to the original data X and obtain the reduced data set X_pca.
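As a quick sanity check (not part of the original example), we can compare the shapes of the original and transformed arrays to confirm that the data has gone from four features down to two components.

print(X.shape)      # (150, 4): 150 samples with 4 original features
print(X_pca.shape)  # (150, 2): the same samples described by 2 principal components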

Now we can plot the results.

plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y)
plt.xlabel('Component 1')
plt.ylabel('Component 2')
plt.show()

In this figure, we can see the distribution of the Iris data set in two-dimensional space after dimensionality reduction. Each dot represents one sample, and the color indicates the iris species.

Now let’s look at what the principal components actually are.

print(pca.components_)

This prints a 2×4 array whose two rows are the principal components ("Component 1" and "Component 2"), with one entry per original feature.

[[ 0.36158968 -0.08226889 0.85657211 0.35884393]
[-0.65653988 -0.72971237 0.1757674 0.07470647]]

Each element represents the weight of one original feature in that component. In other words, we can think of the principal components as vectors used to form linear combinations of the original features. Each vector in the result is a unit vector.
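To illustrate this, here is a minimal check (an addition to the original example, using the pca object fitted above and NumPy): each component has unit length, and X_pca is simply the mean-centered data projected onto the component vectors.

import numpy as np

# each row of components_ should have length 1
print(np.linalg.norm(pca.components_, axis=1))             # [1. 1.]

# X_pca equals the centered data projected onto the components
X_centered = X - pca.mean_
print(np.allclose(X_pca, X_centered @ pca.components_.T))  # True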

We can also look at the amount of variance in the data explained by each component.

print(pca.explained_variance_ratio_)

This output will show the proportion of the variance in the data explained by each component.

[0.92461621 0.05301557]

We can see that these two components together explain about 98% of the variance in the data. This means that most of the structure in the data is captured by just these two dimensions.
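If we are not sure how many components to keep, one common approach (sketched here as an extension of the example, not taken from the original code) is to fit PCA with all components and inspect the cumulative explained variance.

import numpy as np

pca_full = PCA()    # keep all 4 components
pca_full.fit(X)
print(np.cumsum(pca_full.explained_variance_ratio_))
# roughly [0.925 0.978 0.995 1.0] -- two components already cover about 98%

sklearn also accepts a fraction for n_components, for example PCA(n_components=0.95), which keeps just enough components to explain that share of the variance.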

One thing to note is that PCA replaces the original features with new component scores, so the individual original columns are no longer directly present in the transformed data. If certain features must be kept in their original form, they should be set aside before applying PCA and combined with the components afterwards.
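A minimal sketch of this idea, assuming we want to keep the first column untouched and reduce only the remaining three (the column split here is purely for illustration):

import numpy as np

keep = X[:, :1]      # column(s) to keep in their original form
rest = X[:, 1:]      # columns to reduce with PCA

pca_rest = PCA(n_components=2)
rest_pca = pca_rest.fit_transform(rest)

# put the untouched column(s) back next to the new components
X_combined = np.hstack([keep, rest_pca])
print(X_combined.shape)   # (150, 3)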

This is an example of how to implement PCA using the sklearn library in Python. PCA can be applied to many kinds of numerical data and helps us discover the most important directions of variation in high-dimensional data. Once you understand the code in this article, you will be able to apply PCA to your own data sets.
