Introduction to random data generation methods for machine learning algorithms

高洛峰
Release: 2017-03-19 16:57:20

When learning machine learning algorithms, we often need data to verify an algorithm and tune its parameters, but finding a set of data samples well suited to a particular type of algorithm is not so easy. Fortunately, both numpy and scikit-learn provide functions for generating random data. We can generate data suitable for a given model ourselves, clean, normalize, and transform it with random data, then select a model and use an algorithm to fit and predict. The following is a summary of how scikit-learn and numpy generate data samples.

1. Numpy random data generation API

numpy is better suited to producing simple sampled data. The APIs all live in the numpy.random module. Commonly used APIs are:

1) rand(d0, d1, ..., dn) generates an array of shape d0 x d1 x ... x dn, with values drawn uniformly from [0, 1).

For example, np.random.rand(3,2,2) outputs the following 3x2x2 array:

array([[[ 0.49042678,  0.60643763],
        [ 0.18370487,  0.10836908]],

       [[ 0.38269728,  0.66130293],
        [ 0.5775944 ,  0.52354981]],

       [[ 0.71705929,  0.89453574],
        [ 0.36245334,  0.37545211]]])


2) randn(d0, d1, ..., dn) also generates an array of shape d0 x d1 x ... x dn, but the values follow the standard normal distribution N(0, 1).

For example, np.random.randn(3,2) outputs the following 3x2 array; these values are samples from N(0, 1):

array([[-0.5889483 , -0.34054626],
       [-2.03094528, -0.21205145],
       [-0.20804811, -0.97289898]])

If you need samples from a general normal distribution N(μ, σ²), simply transform each value x produced by randn as σx + μ.


For example, 2*np.random.randn(3,2) + 1 outputs the following 3x2 array; these values are samples from N(1, 4):

array([[ 2.32910328, -0.677016  ],
       [-0.09049511,  1.04687598],
       [ 2.13493001,  3.30025852]])
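The same samples can also be drawn directly with np.random.normal, which takes the mean and the standard deviation (not the variance) as arguments. A minimal sketch comparing the two approaches:

import numpy as np

# sigma * x + mu turns standard normal draws into N(mu, sigma^2); here mu=1, sigma=2, so N(1, 4)
samples_scaled = 2 * np.random.randn(3, 2) + 1

# Equivalent draw with np.random.normal; note loc is the mean and scale is sigma, not sigma^2
samples_direct = np.random.normal(loc=1, scale=2, size=(3, 2))

print(samples_scaled)
print(samples_direct)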

3) randint(low[, high, size]) generates random integers of shape size; size can be an integer, or the shape of a matrix or tensor. Values lie in the half-open interval [low, high). If high is omitted, values lie in [0, low).


For example, np.random.randint(3, size=[2,3,4]) returns a 2x3x4 array of integers drawn from [0, 3), so the largest possible value is 2.

array([[[2, 1, 2, 1],
        [0, 1, 2, 1],
        [2, 1, 0, 2]],

       [[0, 1, 0, 0],
        [1, 1, 2, 1],
        [1, 0, 1, 2]]])

Another example: np.random.randint(3, 6, size=[2,3]) returns data with a dimension of 2x3. The value range is [3,6).

array([[4, 5, 3],
       [3, 4, 5]])

4) random_integers(low[, high, size]) is similar to randint above; the difference is that the value range is the closed interval [low, high].
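Note that random_integers is deprecated in newer NumPy releases; the same closed-interval behavior can be obtained from randint by adding 1 to the upper bound. A minimal sketch:

import numpy as np

# Closed interval [3, 6]: shift the exclusive upper bound of randint by one
values = np.random.randint(3, 6 + 1, size=(2, 3))
print(values)  # every entry is 3, 4, 5 or 6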


5) random_sample([size]) returns random floating-point numbers in the half-open interval [0.0, 1.0). For another interval [a, b), transform the result as (b - a) * random_sample([size]) + a.

For example, (5-2)*np.random.random_sample(3)+2 returns 3 random numbers in [2, 5).

array([ 2.87037573,  4.33790491,  2.1662832 ])
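The same result can be obtained directly with np.random.uniform(low, high, size), and when debugging it is often useful to fix the seed first so that all of the draws above become reproducible. A minimal sketch (the seed value 42 is arbitrary):

import numpy as np

# Fixing the seed makes every subsequent draw from the global random state reproducible
np.random.seed(42)

# [0, 1) samples scaled into [2, 5), and the equivalent direct draw
scaled = (5 - 2) * np.random.random_sample(3) + 2
direct = np.random.uniform(2, 5, size=3)

print(scaled)
print(direct)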

2. Introduction to scikit-learn’s random data generation API

scikit-learn’s API for generating random data lives in the datasets module. Compared with numpy, it can generate data tailored to specific machine learning models. Commonly used APIs are:

1) Use make_regression to generate regression model data

2) Use make_hastie_10_2, make_classification or make_multilabel_classification to generate classification model data

3) Use make_blobs to generate clustering model data

4) Use make_gaussian_quantiles to generate grouped multi-dimensional normally distributed data

3. scikit-learn random data generation example

3.1 Regression model random data

Here we use make_regression to generate regression model data. Several key parameters are n_samples (number of generated samples), n_features (number of sample features), noise (random noise added to the samples) and coef (whether to return the regression coefficients). The example code is as follows:

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import make_regression
# X: sample features, y: sample outputs, coef: regression coefficients; 1000 samples, 1 feature each
X, y, coef = make_regression(n_samples=1000, n_features=1, noise=10, coef=True)
# Plot the samples and the noise-free line X * coef
plt.scatter(X, y,  color='black')
plt.plot(X, X*coef, color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()

The output picture is as follows:

[Figure: scatter of the 1000 generated samples with the regression line X*coef]
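As mentioned in the introduction, the generated data can be fed straight into a model for fitting and prediction. As a small illustration (not part of the original example), a LinearRegression fit should recover a coefficient close to the coef returned by make_regression:

from sklearn.linear_model import LinearRegression

# Fit a linear model on the generated samples
reg = LinearRegression()
reg.fit(X, y)

# The estimated coefficient should be close to the true coefficient used to generate the data
print("true coef:", coef, "estimated coef:", reg.coef_[0])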

3.2 Classification model random data

Here we use make_classification to generate three-class classification model data. Several key parameters are n_samples (number of generated samples), n_features (number of sample features), n_redundant (number of redundant features) and n_classes (number of output classes). The example code is as follows:

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import make_classification
# X1: sample features, Y1: sample class labels; 400 samples, 2 features each, 3 output classes, no redundant features, one cluster per class
X1, Y1 = make_classification(n_samples=400, n_features=2, n_redundant=0,
                             n_clusters_per_class=1, n_classes=3)
plt.scatter(X1[:, 0], X1[:, 1], marker='o', c=Y1)
plt.show()


The output graph is as follows:

[Figure: scatter of the 400 generated samples, colored by class label]
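make_hastie_10_2, mentioned above, is another classification generator; it draws 10 standard normal features and assigns a binary +1 / -1 label. A minimal sketch:

from sklearn.datasets import make_hastie_10_2

# 1000 samples with 10 standard normal features and binary labels in {-1, +1}
X2, Y2 = make_hastie_10_2(n_samples=1000)
print(X2.shape)  # (1000, 10)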

3.3 Clustering model random data

Here we use make_blobs to generate clustering model data. Several key parameters are n_samples (number of generated samples), n_features (number of sample features), centers (number of cluster centers, or user-specified cluster centers) and cluster_std (the standard deviation of each cluster, i.e. how tightly its points are grouped). The example is as follows:

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import make_blobs
# X: sample features, y: cluster labels; 1000 samples, 2 features each, 3 clusters centered at [-1,-1], [1,1], [2,2], with cluster standard deviations 0.4, 0.5, 0.2
X, y = make_blobs(n_samples=1000, n_features=2, centers=[[-1,-1], [1,1], [2,2]], cluster_std=[0.4, 0.5, 0.2])
plt.scatter(X[:, 0], X[:, 1], marker='o', c=y)
plt.show()


The output picture is as follows:

[Figure: scatter of the three generated clusters, colored by cluster label]
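centers can also be given as an integer, in which case make_blobs places that many cluster centers at random; a minimal sketch (the values used here are arbitrary):

from sklearn.datasets import make_blobs

# 500 samples, 2 features, 4 randomly placed cluster centers, all with the same standard deviation
X4, y4 = make_blobs(n_samples=500, n_features=2, centers=4, cluster_std=0.6)
print(X4.shape)  # (500, 2)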

3.4 Grouped normal distribution data

We use make_gaussian_quantiles to generate grouped multi-dimensional normally distributed data. Several key parameters are n_samples (number of generated samples), n_features (dimension of the normal distribution), mean (feature means), cov (coefficient of the sample covariance) and n_classes (number of groups into which the data is split by quantile of the normal distribution). The example is as follows:

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import make_gaussian_quantiles
# Generate a 2-dimensional normal distribution split into 3 groups by quantile; 1000 samples; the two feature means are 1 and 2; the covariance coefficient is 2
X1, Y1 = make_gaussian_quantiles(n_samples=1000, n_features=2, n_classes=3, mean=[1,2], cov=2)
plt.scatter(X1[:, 0], X1[:, 1], marker='o', c=Y1)


The output graph is as follows:

[Figure: scatter of the 1000 generated samples, colored by quantile group]

The above is a summary of generating random data; I hope it helps anyone learning machine learning algorithms.
