wget --no-check-certificate https://repo.huaweicloud.com/java/jdk/8u151-b12/jdk-8u151-linux-x64.tar.gz
tar -zxvf jdk-8u151-linux-x64.tar.gz
mv jdk1.8.0_151/ /usr/java8
echo 'export JAVA_HOME=/usr/java8' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
source /etc/profile
java -version
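If the JDK is installed correctly, the output should look roughly like the following (a sketch based on the 8u151 package downloaded above; the exact build strings may differ):
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)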
Note: To download the Hadoop installation package, you can choose among several mirrors: the Huawei mirror (medium but acceptable speed, with a full range of versions), the Tsinghua mirror (versions 3.0.0 and above download very slowly, and few versions are available), and the Beijing Foreign Studies University mirror (very fast downloads, but few versions available). I tested these myself.
wget --no-check-certificate https://repo.huaweicloud.com/apache/hadoop/common/hadoop-3.1.3/hadoop-3.1.3.tar.gz
tar -zxvf hadoop-3.1.3.tar.gz -C /opt/
mv /opt/hadoop-3.1.3 /opt/hadoop
echo 'export HADOOP_HOME=/opt/hadoop/' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/bin' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/sbin' >> /etc/profile
source /etc/profile
echo "export JAVA_HOME=/usr/java8" >> /opt/hadoop/etc/hadoop/yarn-env.sh echo "export JAVA_HOME=/usr/java8" >> /opt/hadoop/etc/hadoop/hadoop-env.sh
hadoop version
If version information is returned, the installation is successful.
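If the hadoop command is not found, the PATH changes may not have taken effect in the current shell. A quick check (assuming the paths configured above):
echo $HADOOP_HOME    # should print /opt/hadoop/
which hadoop         # should point into /opt/hadoop/bin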
a. Execute the following command to open the configuration file for editing.
vim /opt/hadoop/etc/hadoop/core-site.xml
b. Enter i to enter edit mode.
c. Insert the following content into the <configuration></configuration> node.
<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/opt/hadoop/tmp</value>
    <description>location to store temporary files</description>
</property>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>
d. Press the Esc key to exit edit mode, then enter :wq to save and exit.
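To confirm that core-site.xml was picked up, you can query the configuration directly; the expected values below assume the settings entered above:
hdfs getconf -confKey fs.defaultFS      # should print hdfs://localhost:9000
hdfs getconf -confKey hadoop.tmp.dir    # should print file:/opt/hadoop/tmp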
a. Execute the following command to open the configuration file for editing.
vim /opt/hadoop/etc/hadoop/hdfs-site.xml
b. Enter i to enter edit mode.
c. Insert the following content into the <configuration></configuration> node.
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop/tmp/dfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop/tmp/dfs/data</value>
</property>
d. Press the Esc key to exit edit mode, then enter :wq to save and exit.
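The hdfs-site.xml settings can be checked the same way (expected values assume the configuration above):
hdfs getconf -confKey dfs.replication          # should print 1
hdfs getconf -confKey dfs.namenode.name.dir    # should print file:/opt/hadoop/tmp/dfs/name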
ssh-keygen -t rsa
cd ~
cd .ssh
cat id_rsa.pub >> authorized_keys
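When ssh-keygen prompts for a file location and passphrase, pressing Enter at each prompt is sufficient. Below is a minimal non-interactive sketch of the same key setup plus a login test; the chmod lines are an assumption based on standard sshd permission requirements and are not part of the original steps:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa         # generate the key pair without prompts
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # authorize the key for local login
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
ssh localhost                                    # should log in without asking for a password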
If an error is reported, perform the following operations and then re-execute the two commands above; if no error is reported, skip ahead to step five:
Add the following configuration to the environment variables. First, open the file with the following command:
vi /etc/profile
Then add the following content to it
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
Enter the following command to make the changes take effect
source /etc/profile
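A quick way to confirm the variables are visible in the current shell (assuming the exports above):
echo $HDFS_NAMENODE_USER $HDFS_DATANODE_USER $YARN_RESOURCEMANAGER_USER    # should print: root root root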
hadoop namenode -format
start-dfs.sh
If a Y/N prompt appears, enter Y; otherwise press Enter to continue.
start-yarn.sh
jps
Normally six processes are listed: NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager, and Jps itself.
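As a final check (these are standard HDFS commands and default Hadoop 3.x web UI ports, not steps from the original article), you can create and list a directory in HDFS and open the web UIs:
hdfs dfs -mkdir -p /user/root    # create a home directory in HDFS
hdfs dfs -ls /                   # should show the /user directory
# NameNode web UI: http://<server-ip>:9870
# YARN web UI:     http://<server-ip>:8088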