Introduction to Pandas deduplication methods: techniques to make your data cleaner, with concrete code examples
Overview:
In data analysis and processing, we often encounter duplicate data. Duplicates can bias analysis results, so deduplication is a fundamental and important data-cleaning operation. Pandas provides several deduplication methods; this article briefly introduces the most commonly used techniques and provides concrete code examples.
Method 1: drop_duplicates()
Pandas' drop_duplicates() method is the most commonly used deduplication tool. It removes duplicate rows, optionally based on specified columns. By default, it retains the first occurrence of each duplicate value and deletes the subsequent occurrences. The following is a code example:
import pandas as pd
data = {'A': [1, 2, 3, 4, 4, 5, 6],
'B': ['a', 'b', 'c', 'd', 'd', 'e', 'f']}
df = pd.DataFrame(data)
df.drop_duplicates(inplace=True)
print(df)
Run the above code and you will get a DataFrame with duplicate rows removed.
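One detail worth knowing about drop_duplicates(): it keeps the original index labels of the surviving rows, so the index will have gaps after deduplication. A minimal sketch using the same sample data (the variable names deduped and clean are illustrative):

```python
import pandas as pd

data = {'A': [1, 2, 3, 4, 4, 5, 6],
        'B': ['a', 'b', 'c', 'd', 'd', 'e', 'f']}
df = pd.DataFrame(data)

deduped = df.drop_duplicates()   # keep='first' by default
print(deduped.index.tolist())    # [0, 1, 2, 3, 5, 6] -- original labels survive

# reset_index(drop=True) renumbers the rows if a clean 0..n-1 index is wanted
clean = deduped.reset_index(drop=True)
print(clean.index.tolist())      # [0, 1, 2, 3, 4, 5]
```

Whether to reset the index depends on whether downstream code relies on positional labels.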
Method 2: duplicated() and ~ operator
In addition to the drop_duplicates() method, we can use the duplicated() method to determine whether each row is a duplicate, and then invert the resulting boolean mask with the ~ operator to select the non-duplicate rows. The following is a code example:
import pandas as pd
data = {'A': [1, 2, 3, 4, 4, 5, 6],
'B': ['a', 'b', 'c', 'd', 'd', 'e', 'f']}
df = pd.DataFrame(data)
df = df[~df.duplicated()]
print(df)
Run the above code and you will get the same result as Method 1.
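Because duplicated() returns a boolean Series rather than removing anything, it is also useful for inspecting or counting duplicates before deciding what to drop. A minimal sketch with the same sample data:

```python
import pandas as pd

data = {'A': [1, 2, 3, 4, 4, 5, 6],
        'B': ['a', 'b', 'c', 'd', 'd', 'e', 'f']}
df = pd.DataFrame(data)

mask = df.duplicated()   # True where a row repeats an earlier row
print(mask.tolist())     # [False, False, False, False, True, False, False]

# Summing the mask counts the duplicate rows without removing them
print(mask.sum())        # 1
```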
Method 3: subset parameter
The drop_duplicates() method also provides a subset parameter, which specifies one or more columns to use when identifying duplicate rows. The following is a code example:
import pandas as pd
data = {'A': [1, 2, 3, 4, 4, 5, 6],
'B': ['a', 'b', 'c', 'd', 'd', 'e', 'f'],
'C': ['x', 'y', 'y', 'z', 'z', 'y', 'z']}
df = pd.DataFrame(data)
df.drop_duplicates(subset=['A', 'B'], inplace=True)
print(df)
Run the above code and you will get the result of removing duplicate rows based on columns 'A' and 'B'.
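Note that a narrower subset generally drops more rows, because fewer columns need to match for two rows to count as duplicates. A minimal sketch deduplicating on column 'C' alone (kept is an illustrative variable name):

```python
import pandas as pd

data = {'A': [1, 2, 3, 4, 4, 5, 6],
        'B': ['a', 'b', 'c', 'd', 'd', 'e', 'f'],
        'C': ['x', 'y', 'y', 'z', 'z', 'y', 'z']}
df = pd.DataFrame(data)

# Only the first occurrence of each distinct value in 'C' survives
kept = df.drop_duplicates(subset=['C'])
print(kept['C'].tolist())  # ['x', 'y', 'z']
```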
Method 4: keep parameter
The keep parameter of the drop_duplicates() method controls which occurrence of a duplicate is retained: 'first' (the default), 'last', or False. Setting it to 'last' retains the last occurrence of each duplicate value. The following is a code example:
import pandas as pd
data = {'A': [1, 2, 3, 4, 4, 5, 6],
'B': ['a', 'b', 'c', 'd', 'd', 'e', 'f']}
df = pd.DataFrame(data)
df.drop_duplicates(keep='last', inplace=True)
print(df)
Run the above code and you will get a result in which the last occurrence of each duplicate value is retained.
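Setting keep=False goes one step further: it discards every member of a duplicate group, keeping only rows that are unique in the data. A minimal sketch with the same sample data (unique_only is an illustrative variable name):

```python
import pandas as pd

data = {'A': [1, 2, 3, 4, 4, 5, 6],
        'B': ['a', 'b', 'c', 'd', 'd', 'e', 'f']}
df = pd.DataFrame(data)

# keep=False drops all occurrences of any duplicated row
unique_only = df.drop_duplicates(keep=False)
print(unique_only['A'].tolist())  # [1, 2, 3, 5, 6] -- both rows with A=4 are gone
```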
Method 5: Use primary key to remove duplicates
When processing a DataFrame containing multiple columns, we can use the set_index() method to set one or more columns as a primary key, and then use the index's duplicated() method to remove duplicate rows. The following is a code example:
import pandas as pd
data = {'A': [1, 2, 3, 4, 4, 5, 6],
'B': ['a', 'b', 'c', 'd', 'd', 'e', 'f'],
'C': ['x', 'y', 'y', 'z', 'z', 'y', 'z']}
df = pd.DataFrame(data)
df.set_index(['A', 'B'], inplace=True)
df = df[~df.index.duplicated()]
print(df)
Run the above code and you will get the result of removing duplicate rows based on columns 'A' and 'B'.
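The index-based approach composes with the keep argument in the same way as drop_duplicates(), and reset_index() turns the key columns back into regular columns afterwards. A minimal sketch, keeping the last occurrence instead of the first:

```python
import pandas as pd

data = {'A': [1, 2, 3, 4, 4, 5, 6],
        'B': ['a', 'b', 'c', 'd', 'd', 'e', 'f'],
        'C': ['x', 'y', 'y', 'z', 'z', 'y', 'z']}
df = pd.DataFrame(data).set_index(['A', 'B'])

# Index.duplicated() accepts the same keep argument as drop_duplicates()
df = df[~df.index.duplicated(keep='last')]

# reset_index() restores 'A' and 'B' as ordinary columns
df = df.reset_index()
print(df['C'].tolist())  # ['x', 'y', 'y', 'z', 'y', 'z']
```

Here the second (4, 'd') row survives instead of the first, so its 'C' value ('z' from the later row) is the one retained.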
Summary:
This article briefly introduced several commonly used deduplication methods in Pandas: the drop_duplicates() method, duplicated() combined with the ~ operator, the subset parameter, the keep parameter, and primary-key (index-based) deduplication. By learning and flexibly applying these techniques, we can handle duplicate data more conveniently, keep data cleaner, and provide a reliable foundation for subsequent data analysis and processing. I hope this article is helpful as you learn Pandas.
The above is the detailed content of Learn these techniques to make your data tidier: a brief introduction to Pandas deduplication methods. For more information, please follow other related articles on the PHP Chinese website!