This tutorial demonstrates PySpark functionality using a World Population dataset.
Preliminary Setup
First, ensure Python is installed by checking its version in your terminal:
<code class="language-bash">python --version</code>
If not installed, download Python from the official website, selecting the appropriate version for your operating system.
Install Jupyter Notebook (for example, with pip install notebook). Alternatively, install Anaconda, which bundles Python, Jupyter Notebook, and many scientific libraries.
Launch Jupyter Notebook from your terminal:
<code class="language-bash">jupyter notebook</code>
Create a new Python 3 notebook. Install required libraries:
<code class="language-python">!pip install pandas
!pip install pyspark
!pip install findspark
!pip install pyspark_dist_explore</code>
Download the population dataset (CSV format) from datahub.io and save it as population.csv in the directory where your notebook runs; the code below loads it by that name.
Import Libraries and Initialize Spark
Import necessary libraries:
<code class="language-python">import pandas as pd
import matplotlib.pyplot as plt

import findspark
findspark.init()

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, IntegerType, FloatType, StringType, StructField
from pyspark_dist_explore import hist</code>
Before initializing the Spark session, verify Java is installed:
<code class="language-bash">java -version</code>
If not, install the Java Development Kit (JDK).
Initialize the Spark session:
<code class="language-python">spark = SparkSession \
    .builder \
    .appName("World Population Analysis") \
    .config("spark.sql.execution.arrow.pyspark.enabled", "true") \
    .getOrCreate()</code>
Verify the session:
<code class="language-python">spark</code>
If a warning about hostname resolution appears, set SPARK_LOCAL_IP in local-spark-env.sh or spark-env.sh to an IP address other than 127.0.0.1 (e.g., export SPARK_LOCAL_IP="10.0.0.19") before re-initializing.
Data Loading and Manipulation
Load data into a Pandas DataFrame:
<code class="language-python">pd_dataframe = pd.read_csv('population.csv')
pd_dataframe.head()</code>
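The rest of the tutorial assumes the CSV uses the datahub.io population layout, with the columns "Country Name", "Country Code", "Year", and "Value". A quick sanity check of that assumption, sketched here against a hypothetical in-memory sample (the values are illustrative, not taken from the dataset):

```python
import pandas as pd

# Hypothetical rows mirroring the expected schema; real values come
# from the downloaded population.csv.
sample = pd.DataFrame({
    "Country Name": ["Aruba", "Aruba", "Afghanistan"],
    "Country Code": ["ABW", "ABW", "AFG"],
    "Year": [1960, 1961, 1960],
    "Value": [54208.0, 55434.0, 8996967.0],
})

# Applying the same check to pd_dataframe confirms the file loaded as expected.
expected = ["Country Name", "Country Code", "Year", "Value"]
assert list(sample.columns) == expected
print(sample.shape)  # (3, 4)
```

If the assertion fails on your copy of the data, inspect `pd_dataframe.columns` and adjust the column names used in the rest of the tutorial accordingly.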
Load data into a Spark DataFrame:
<code class="language-python">sdf = spark.createDataFrame(pd_dataframe)
sdf.printSchema()</code>
Rename columns for easier processing:
<code class="language-python">sdf_new = sdf.withColumnRenamed("Country Name", "Country_Name") \
             .withColumnRenamed("Country Code", "Country_Code")
sdf_new.head(5)</code>
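The renames above make the column names SQL-friendly by replacing spaces with underscores. A small sketch that generalizes this to any set of column names (the `sanitize` helper is our own illustration, not part of PySpark):

```python
def sanitize(names):
    """Map each column name containing a space to an underscore version."""
    return {n: n.replace(" ", "_") for n in names if " " in n}

mapping = sanitize(["Country Name", "Country Code", "Year", "Value"])
print(mapping)  # {'Country Name': 'Country_Name', 'Country Code': 'Country_Code'}

# Applying it to the Spark DataFrame from above would look like:
# for old, new in mapping.items():
#     sdf = sdf.withColumnRenamed(old, new)
```

This avoids hand-writing one `withColumnRenamed` call per column when the dataset has many space-containing names.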
Create a temporary view so the DataFrame can be queried with SQL (re-running this cell raises an error because the view already exists; use createOrReplaceTempView in that case):
<code class="language-python">sdf_new.createTempView('population_table')</code>
Data Exploration with SQL Queries
Run SQL queries:
<code class="language-python">spark.sql("SELECT * FROM population_table").show()
spark.sql("SELECT Country_Name FROM population_table").show()</code>
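Spark SQL accepts standard SQL, so aggregates work the same way. To illustrate a more useful query shape without needing a running Spark session, here is the same statement exercised against Python's built-in sqlite3 on hypothetical rows; via `spark.sql(...)` it would run unchanged against `population_table`:

```python
import sqlite3

# In-memory stand-in for the population_table temp view; values are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE population_table (Country_Name TEXT, Year INTEGER, Value REAL)"
)
conn.executemany(
    "INSERT INTO population_table VALUES (?, ?, ?)",
    [
        ("Aruba", 1960, 54208.0),
        ("Aruba", 2000, 90853.0),
        ("Afghanistan", 1960, 8996967.0),
    ],
)

# Peak recorded population per country.
rows = conn.execute(
    "SELECT Country_Name, MAX(Value) AS peak "
    "FROM population_table GROUP BY Country_Name ORDER BY Country_Name"
).fetchall()
print(rows)  # [('Afghanistan', 8996967.0), ('Aruba', 90853.0)]
```

In the notebook, the equivalent would be `spark.sql("SELECT Country_Name, MAX(Value) AS peak FROM population_table GROUP BY Country_Name").show()`.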
Data Visualization
Plot a histogram of Aruba's population:
<code class="language-python">sdf_population = sdf_new.filter(sdf_new.Country_Name == 'Aruba')
fig, ax = plt.subplots()
hist(ax, sdf_population.select('Value'), bins=20, color=['red'])</code>
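Under the hood, a histogram like this buckets the column's values into equal-width bins and counts how many fall into each. A minimal standard-library sketch of that binning idea, using hypothetical population figures rather than output from the dataset:

```python
def bin_counts(values, bins):
    """Count values into equal-width bins between min and max."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1  # guard against all-equal values
    counts = [0] * bins
    for v in values:
        # Clamp the top edge so max(values) lands in the last bin.
        i = min(int((v - lo) / width), bins - 1)
        counts[i] += 1
    return counts

print(bin_counts([54208, 55434, 56234, 90853], bins=4))  # [3, 0, 0, 1]
```

The `bins=20` argument to `hist` above controls the same trade-off: more bins show finer structure, fewer bins smooth the distribution.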