Hadoop Streaming enables any executable program that reads standard input and writes standard output (stdin/stdout) to act as a Hadoop mapper or reducer. For example:
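A canonical invocation looks roughly like this; the jar name and the HDFS paths below are placeholders, so adjust them for your installation:

```shell
# Run a streaming job whose mapper is cat and whose reducer is wc.
# The jar location and the /user/me/... paths are placeholders.
hadoop jar hadoop-streaming.jar \
    -input /user/me/streaming-demo/input \
    -output /user/me/streaming-demo/output \
    -mapper /bin/cat \
    -reducer /usr/bin/wc
```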
In this example, the cat and wc tools that ship with Unix/Linux are used as the mapper and reducer. Isn't that amazing?
If you are used to some dynamic language, you can write MapReduce in that language just as you always have; Hadoop is merely the framework that runs it. Let me demonstrate how to implement a word-count MapReduce in PHP.
1. Find the Streaming jar
There is no hadoop-streaming.jar in the Hadoop root directory; because Streaming is a contrib module, you have to look for it under the contrib directory. Taking hadoop-0.20.2 as an example, it is here:
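One way to locate it, assuming the HADOOP_HOME environment variable points at your Hadoop installation, is:

```shell
# Search the contrib tree for the streaming jar; the exact file name
# depends on the Hadoop version (HADOOP_HOME is assumed to be set).
find "$HADOOP_HOME/contrib" -name "*streaming*.jar"
```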
2. Write Mapper
Create a new wc_mapper.php and write the following code:
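A minimal sketch of such a mapper; the exact tokenization is an assumption, and the tab separator follows Streaming's default key/value convention:

```php
#!/usr/bin/php
<?php
// Read text from stdin line by line and emit "word<TAB>1" for every word.
// Splitting on whitespace is an assumption; adapt the regex as needed.
$in = fopen('php://stdin', 'r');
while (($line = fgets($in)) !== false) {
    $words = preg_split('/\s+/', trim($line), -1, PREG_SPLIT_NO_EMPTY);
    foreach ($words as $word) {
        echo $word, "\t", 1, PHP_EOL;
    }
}
fclose($in);
```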
The gist of this code: find the words in each line of input text and output each one in the form:
hello 1
world 1
It is basically no different from ordinary PHP, right? There are two things that may feel a little strange:
PHP as an executable program
The "#!/usr/bin/php" on the first line tells Linux to use the program /usr/bin/php as the interpreter for the code that follows. Anyone who has written Linux shell scripts will recognize this convention: the first line of every shell script looks like #!/bin/bash or #!/usr/bin/python.
With this line in place, once the file is saved and made executable, you can run wc_mapper.php directly, just like the cat and grep commands: ./wc_mapper.php
Use stdin to receive input
PHP supports several ways of receiving input. The most familiar is probably reading parameters passed over the Web from the $_GET and $_POST superglobals; another is reading command-line arguments from $_SERVER['argv']. Here, standard input (stdin) is used. In practice it works like this:
You type ./wc_mapper.php in the Linux console
wc_mapper.php starts, and the console waits for keyboard input
You enter text via the keyboard
You press Ctrl+D to end the input; wc_mapper.php then executes the real business logic and prints the results
So where is stdout? print itself already writes to stdout, exactly as in the web programs and CLI scripts we wrote before.
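Instead of typing interactively, you can also pipe text in; a quick check might look like this, assuming wc_mapper.php is in the current directory:

```shell
# Make the script executable, then feed it a line of text on stdin.
chmod +x wc_mapper.php
echo "hello world hello" | ./wc_mapper.php
```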
3. Write Reducer
Create a new wc_reducer.php and write the following code:
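Again, a minimal sketch, under the assumption that each input line arrives from the mapper as "word<TAB>count":

```php
#!/usr/bin/php
<?php
// Sum the counts for each word arriving on stdin as "word<TAB>count" lines,
// then print the totals. The tab separator is an assumption.
$results = array();
$in = fopen('php://stdin', 'r');
while (($line = fgets($in)) !== false) {
    $line = trim($line);
    if ($line === '') {
        continue;
    }
    list($word, $count) = explode("\t", $line);
    if (!isset($results[$word])) {
        $results[$word] = 0;
    }
    $results[$word] += (int)$count;
}
fclose($in);
foreach ($results as $word => $count) {
    echo $word, "\t", $count, PHP_EOL;
}
```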
The main idea of this code is to count how many times each word appears and output the totals in the form:
hello 2
world 1
4. Use Hadoop to run
Upload the sample text to be counted
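For instance, with placeholder names for the local sample file and the HDFS input path:

```shell
# Copy a local sample file into HDFS so the job can read it.
hadoop fs -put wordcount-sample.txt /user/me/wordcount/input/
```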
Execute PHP mapreduce program in Streaming mode
The input and output directories are paths on HDFS
The mapper and reducer are paths on the local machine. Be sure to use absolute paths, not relative paths; otherwise Hadoop reports an error saying the MapReduce program cannot be found.
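Putting it together, the invocation might look like this; the jar path, HDFS paths, and script locations are placeholders, and note the absolute paths to the PHP scripts:

```shell
# The streaming jar path varies by Hadoop version; adjust as needed.
STREAMING_JAR=$HADOOP_HOME/contrib/streaming/hadoop-0.20.2-streaming.jar

hadoop jar $STREAMING_JAR \
    -input /user/me/wordcount/input \
    -output /user/me/wordcount/output \
    -mapper /home/me/wc_mapper.php \
    -reducer /home/me/wc_reducer.php
```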
View results
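For example, using the output path from above (part file names may differ):

```shell
# List the job output and print the first part file.
hadoop fs -ls /user/me/wordcount/output
hadoop fs -cat /user/me/wordcount/output/part-00000
```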
5. Shell version of Hadoop MapReduce program
# Load configuration file
source './config.sh'

# Process command line parameters
while getopts "d:" arg
do
    case $arg in
        d)
            date=$OPTARG
            ;;
        ?)
            echo "unknown argument"
            exit 1
            ;;
    esac
done

# The default processing date is yesterday
# (BSD date syntax; on GNU/Linux use: date -d yesterday +%Y-%m-%d)
default_date=`date -v-1d +%Y-%m-%d`
date=${date:-${default_date}}
if ! [[ "$date" =~ [12][0-9]{3}-(0[1-9]|1[0-2])-(0[1-9]|[12][0-9]|3[01]) ]]
then
    echo "invalid date(yyyy-mm-dd): $date"
    exit 1
fi

# Files to be processed
log_files=$(${hadoop_home}bin/hadoop fs -ls ${log_file_dir_in_hdfs} | awk '{print $8}' | grep $date)
# Count non-empty lines; yields 0 when nothing matched
log_files_amount=$(echo "$log_files" | grep -c .)
if [ $log_files_amount -lt 1 ]
then
    echo "no log files found"
    exit 0
fi

# Input file list: one -input flag per matched file
for f in $log_files
do
    input_files_list="${input_files_list} -input ${f}"
done

function map_reduce () {
    if ! ${hadoop_home}bin/hadoop jar ${streaming_jar_path} ${input_files_list} -output ${mapreduce_output_dir}${date}/${1}/ -mapper "${mapper} ${1}" -reducer "${reducer}" -file "${mapper}" -file "${reducer}"
    then
        exit 1
    fi
}

# Loop through each bucket
for bucket in ${bucket_list[@]}
do
    map_reduce ${bucket}
done