Note before starting:
Hadoop, ZooKeeper, Spark, and Kafka are already installed and running normally.
We now begin installing and deploying Hive.
Prerequisites:
1. JDK 1.6+
2. Hadoop 2.x
3. Hive 0.13-0.19
4. MySQL (plus the mysql-connector JDBC jar)
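Before proceeding, it is worth confirming the prerequisites are actually in place. A minimal sanity check (the JDK version shown is the one used later in this setup):

```shell
java -version      # expect 1.6 or later (this walkthrough uses jdk1.7.0_79)
hadoop version     # expect a 2.x release
mysql --version    # the MySQL client/server must be installed
```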
一、Installation

1. Download

Grab a release from https://hive.apache.org/downloads.html (this walkthrough uses apache-hive-2.1.0-bin.tar.gz), then extract and rename it:

    tar xvf apache-hive-2.1.0-bin.tar.gz -C /usr/local/hadoop/
    cd /usr/local/hadoop/
    mv apache-hive-2.1.0-bin hive

Add the environment variables (e.g. in /etc/profile) and re-source the file. Note that the variables are defined before PATH references them:

    #java
    export JAVA_HOME=/soft/jdk1.7.0_79/
    export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    #hadoop
    export HADOOP_HOME=/usr/local/hadoop/hadoop
    #scala
    export SCALA_HOME=/usr/local/hadoop/scala
    #spark
    export SPARK_HOME=/usr/local/hadoop/spark
    #hive
    export HIVE_HOME=/usr/local/hadoop/hive
    #bin
    export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$SCALA_HOME/bin:$SPARK_HOME/bin:$HIVE_HOME/bin

2. Modify the default configuration files
In conf/hive-site.xml, change every configuration value that contains "${system:java.io.tmpdir}" to a concrete local scratch path, e.g. /tmp/hive.

二、Install MySQL and make sure it is started

1. Create the metastore database
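The database-creation step itself is not spelled out above, so here is a minimal sketch. It creates a metastore database and user matching the credentials used in hive-site.xml (user hive, password xujun, database hive from the JDBC URL), and copies the MySQL JDBC driver from the prerequisites into Hive's lib directory. The connector jar file name is an assumption; substitute whatever version you downloaded.

```shell
# Create the metastore DB and user on the MySQL host
# (credentials match hive-site.xml; prompts for the MySQL root password).
mysql -u root -p <<'SQL'
CREATE DATABASE hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'xujun';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;
SQL

# Make the MySQL JDBC driver visible to Hive.
# The jar name below is an assumption -- use the version you actually fetched.
cp mysql-connector-java-5.1.40-bin.jar /usr/local/hadoop/hive/lib/
```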
三、Initialize Hive
Edit the startup environment:

    cd /usr/local/hadoop/hive
    vim bin/hive-config.sh

    #java
    export JAVA_HOME=/soft/jdk1.7.0_79/
    #hadoop
    export HADOOP_HOME=/usr/local/hadoop/hadoop
    #hive
    export HIVE_HOME=/usr/local/hadoop/hive
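Hive 2.x will not start until the metastore schema exists in the backing database. Assuming the MySQL settings from hive-site.xml, the schema is typically created once with schematool:

```shell
cd /usr/local/hadoop/hive
# -initSchema creates the metastore tables in the "hive" MySQL database
bin/schematool -dbType mysql -initSchema
```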
四、Start

Make sure the metastore connection is configured first:

    cd /usr/local/hadoop/hive
    vim conf/hive-site.xml

    <configuration>
      <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true</value>
        <description>JDBC connect string for a JDBC metastore</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>hive</value>
        <description>Username to use against metastore database</description>
      </property>
      <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>xujun</value>
        <description>password to use against metastore database</description>
      </property>
    </configuration>
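With the metastore configured and initialized, Hive can be launched from its install directory; a quick smoke test at the prompt confirms the MySQL connection works:

```shell
cd /usr/local/hadoop/hive
bin/hive
# at the hive> prompt, verify the metastore is reachable:
#   show databases;
```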