Assume that the Hadoop installation directory HADOOP_HOME is /home/admin/hadoop.
Startup and shutdown
Start Hadoop
1. Enter the HADOOP_HOME directory.
2. Execute sh bin/start-all.sh
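To confirm that the Hadoop daemons actually started, you can run the JDK's jps tool (assuming a JDK is on the PATH); it should list processes such as NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker:
jps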
Stop Hadoop
1. Enter the HADOOP_HOME directory.
2. Execute sh bin/stop-all.sh
File operations
Hadoop uses HDFS, whose functionality is similar to the disk file system we normally use, and its commands support wildcard characters such as * (an example is shown below).
View the file list
View the files in the /user/admin/aaron directory in hdfs.
1. Enter the HADOOP_HOME directory.
2. Execute sh bin/hadoop fs -ls /user/admin/aaron
This way we can see the files in the /user/admin/aaron directory in hdfs.
We can also list all files in the /user/admin/aaron directory in hdfs (including files in subdirectories).
1. Enter the HADOOP_HOME directory.
2. Execute sh bin/hadoop fs -lsr /user/admin/aaron
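As mentioned above, the fs commands accept wildcards. For example, assuming the directory contains files ending in .txt, the following lists only those files:
sh bin/hadoop fs -ls /user/admin/aaron/*.txt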
Create a directory
Create a new directory called newDir under the /user/admin/aaron directory in hdfs.
1. Enter the HADOOP_HOME directory.
2. Execute sh bin/hadoop fs -mkdir /user/admin/aaron/newDir
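To confirm that the new directory exists, you can list its parent directory again:
sh bin/hadoop fs -ls /user/admin/aaron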
Delete files
Delete the file named needDelete in the /user/admin/aaron directory in hdfs.
1. Enter the HADOOP_HOME directory.
2. Execute sh bin/hadoop fs -rm /user/admin/aaron/needDelete
Delete the /user/admin/aaron directory in hdfs and all files in this directory
1. Enter the HADOOP_HOME directory.
2. Execute sh bin/hadoop fs -rmr /user/admin/aaron
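Note that on newer Hadoop releases the -rmr option is deprecated; the same recursive delete is written with -rm -r instead, for example:
sh bin/hadoop fs -rm -r /user/admin/aaron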
Upload files
Upload the local file /home/admin/newFile to the /user/admin/aaron directory in hdfs.
1. Enter the HADOOP_HOME directory.
2. Execute sh bin/hadoop fs -put /home/admin/newFile /user/admin/aaron/
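The -copyFromLocal option behaves the same way for local files and can be used as an equivalent upload command:
sh bin/hadoop fs -copyFromLocal /home/admin/newFile /user/admin/aaron/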
Download file
Download the newFile file in the /user/admin/aaron directory in hdfs to the local path /home/admin/newFile.
1. Enter the HADOOP_HOME directory.
2. Execute sh bin/hadoop fs -get /user/admin/aaron/newFile /home/admin/newFile
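Similarly, -copyToLocal can be used as an equivalent download command:
sh bin/hadoop fs -copyToLocal /user/admin/aaron/newFile /home/admin/newFile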
View files
We can view files directly in hdfs; the function is similar to the Unix cat command.
View the newFile file in the /user/admin/aaron directory in hdfs.
1. Enter the HADOOP_HOME directory.
2. Execute sh bin/hadoop fs -cat /user/admin/aaron/newFile
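Because -cat writes the whole file to standard output, it is common to pipe the result through a local tool such as head when the file is large (assuming head is available on the client machine):
sh bin/hadoop fs -cat /user/admin/aaron/newFile | head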
MapReduce Job Operation
Submit MapReduce Job
In principle, every Hadoop MapReduce Job is packaged as a jar file.
Run the MapReduce Job packaged in /home/admin/hadoop/job.jar.
1. Enter the HADOOP_HOME directory.
2. Execute sh bin/hadoop jar /home/admin/hadoop/job.jar [jobMainClass] [jobArgs]
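For example, the examples jar shipped with Hadoop (its exact name depends on the version, e.g. hadoop-0.20.2-examples.jar) contains a wordcount Job; assuming an input directory already exists in hdfs, it can be run as:
sh bin/hadoop jar hadoop-0.20.2-examples.jar wordcount /user/admin/aaron/input /user/admin/aaron/output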
Kill a running Job
Assume that the Job ID is job_201005310937_0053.
1. Enter the HADOOP_HOME directory.
2. Execute sh bin/hadoop job -kill job_201005310937_0053
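If you do not know the Job ID, you can list the currently running Jobs first:
sh bin/hadoop job -list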
More Hadoop commands
The Hadoop commands introduced above are the most commonly used ones. If you want to know more, you can get a description of all the commands as follows.
1. Enter the HADOOP_HOME directory.
2. Execute sh bin/hadoop
This will print a usage message listing the available Hadoop commands.
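For a description of the file system commands specifically, you can also run:
sh bin/hadoop fs -help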