The library needed for data cleaning here is pandas. It can be installed by running the following in the terminal: pip install pandas.
First, we need to read the data.
import pandas as pd

data = pd.read_csv(r'E:\PYthon\用户价值分析 RFM模型\data.csv')
pd.set_option('display.max_columns', 888)  # larger than the total number of columns
pd.set_option('display.width', 1000)
print(data.head())
print(data.info())
The pd.read_csv() call reads the data; pandas provides read functions for many formats, and CSV is one of the fastest to read and write.
The two pd.set_option() calls make all columns display when printing, because if there are many columns, PyCharm will hide some of the middle ones; these two lines prevent that.
data.head() displays the first few rows of the table, so we can see what fields and column names it has.
data.info() displays the basic information of the table: how many rows each column has, what type each field is, and how many non-null values it contains, so in this first step we can already see which columns have null values.
From data.info() we can see that the table has 541909 rows in total, but the Description and CustomerID columns have fewer non-null values than that, so these two columns contain missing data.
# Handle null values
print(data.isnull().sum())  # total null count per column

# Delete null values
data.drop(columns=['Description'], inplace=True)
print(data.info())

data.isnull() checks whether each value is null; data.isnull().sum() counts the number of null values in each column.
The data.drop(columns=['Description'], inplace=True) line deletes the Description column, which contains null values. inplace=True means the DataFrame is modified in place; without it, the data is not changed and printing it would show the same table as before, unless the result is assigned back to a variable.
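To make the inplace behaviour concrete, here is a minimal sketch (assuming the same data DataFrame); both forms end with the column removed, only the style differs:

# Option 1: modify the DataFrame in place
data.drop(columns=['Description'], inplace=True)

# Option 2: drop() returns a new DataFrame, so assign it back
data = data.drop(columns=['Description'])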
Since this column has relatively few null values and its data is not that important to our analysis, we choose to delete the entire column.
Our table is used to filter customers, so CustomerID is the required field: rows where CustomerID is missing are removed, taking CustomerID as the standard.
# CustomerID has null values
# Delete rows that contain null values in any column
data.dropna(inplace=True)
# print(data.info())
print(data.isnull().sum())
# Since CustomerID is a required field, the remaining null rows are forcibly dropped, taking CustomerID as the standard
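If you only want to require CustomerID to be non-null, rather than dropping rows with a null in any column, pandas also lets dropna() be limited to specific columns; this is an alternative sketch, not what the code above does:

# Alternative: only drop rows where CustomerID is missing
data.dropna(subset=['CustomerID'], inplace=True)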
Next, we perform type conversion on the remaining fields.
Type conversion
# Convert to datetime type
data['InvoiceDate'] = pd.to_datetime(data['InvoiceDate'])
# Convert CustomerID to integer type
data['CustomerID'] = data['CustomerID'].astype('int')
print(data.info())
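If the raw values might be malformed, a slightly more defensive variant (an assumption, not part of the original code) is to let pd.to_datetime() turn unparseable dates into NaT and to use pandas' nullable integer type for CustomerID:

# Unparseable dates become NaT instead of raising an error
data['InvoiceDate'] = pd.to_datetime(data['InvoiceDate'], errors='coerce')
# Nullable integer dtype tolerates missing values if any remain
data['CustomerID'] = data['CustomerID'].astype('Int64')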
We have dealt with null values above; next we deal with abnormal values (outliers).
To view the basic statistical distribution of the table, you can use describe().
print(data.describe())
You can see that the minimum value in the Quantity column is -80995, which is obviously abnormal, so this column needs to be filtered for outliers.
We only keep values greater than 0.
data = data[data['Quantity'] > 0]
print(data)
After filtering, only 397924 rows are printed.
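As a quick sanity check (a small sketch, not in the original article), you can confirm that the minimum of Quantity is now positive:

# Verify the filter worked: the minimum should now be at least 1
print(data['Quantity'].min())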
# View duplicate rows
print(data[data.duplicated()])
There are 5194 rows of duplicate values. These rows are complete duplicates, so we can delete them as useless data.
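If you only need the count rather than the rows themselves, duplicated() can be summed directly; a one-line sketch:

# Count fully duplicated rows
print(data.duplicated().sum())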
# Delete duplicate rows
data.drop_duplicates(inplace=True)
print(data.info())
The deletion is applied to the original table in place, and then we check the table's basic information again.
Now there are 392730 rows of data left. At this point, data cleaning is complete.
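As an optional final step (not part of the article's code), the cleaned table can be written back to disk; the output file name here is just a placeholder:

# Save the cleaned data to a new CSV file (hypothetical path)
data.to_csv('cleaned_data.csv', index=False)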