
How to Efficiently Filter PySpark DataFrames Using an IN Clause?

Patricia Arquette
Release: 2024-12-28 21:57:11


Handling PySpark DataFrame Filtering with an IN Clause

Filtering a PySpark DataFrame with a SQL-like IN clause cannot reference local Python variables directly; the values must be passed in explicitly, for example via string formatting or the DataFrame DSL.

In the given example, the SQL string references a local Python variable `a`, which the SQL parser cannot see:

sc = SparkContext()
sqlc = SQLContext(sc)
df = sqlc.sql('SELECT * FROM my_df WHERE field1 IN a')  # fails: `a` is not defined in the SQL environment

Strings passed to SQLContext.sql are evaluated in the SQL environment and do not capture the surrounding Python closure. To pass values explicitly, use string formatting:

df.registerTempTable("df")  # use df.createOrReplaceTempView("df") on Spark 2.0+
sqlc.sql("SELECT * FROM df WHERE v IN {0}".format(("foo", "bar"))).count()
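To see why this works, note that the repr of a Python tuple of two or more strings happens to be valid SQL list syntax. This is a sketch of the rendering behavior, plus a common pitfall and a safer manual alternative; the table and column names are the illustrative ones from the snippets above:

```python
# The tuple's repr renders as a valid SQL value list:
query = "SELECT * FROM df WHERE v IN {0}".format(("foo", "bar"))
print(query)  # SELECT * FROM df WHERE v IN ('foo', 'bar')

# Pitfall: a one-element tuple renders with a trailing comma,
# which is NOT valid SQL.
bad = "SELECT * FROM df WHERE v IN {0}".format(("foo",))
print(bad)  # SELECT * FROM df WHERE v IN ('foo',)

# A safer approach for arbitrary value lists: quote and escape
# each value yourself before joining.
values = ["foo", "bar"]
in_list = ", ".join("'{}'".format(v.replace("'", "''")) for v in values)
query2 = "SELECT * FROM df WHERE v IN ({})".format(in_list)
print(query2)  # SELECT * FROM df WHERE v IN ('foo', 'bar')
```

Even with escaping, interpolating values into SQL text is fragile; the DSL approach below avoids the problem entirely.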

Alternatively, the DataFrame DSL provides a better option for dynamic queries:

from pyspark.sql.functions import col

df.where(col("v").isin({"foo", "bar"})).count()

