Table of Contents
Layout of the five servers
-- Configure Linux and install the JDK
-- Step 1. Set up passwordless SSH login for the hadoop user
-- Step 3. Hadoop cluster configuration:
-- Step 3.1 vi $HADOOP_HOME/etc/hadoop/slaves
-- Step 3.2 vi $HADOOP_HOME/etc/hadoop/hadoop-env.sh (add the JAVA_HOME environment variable and the native library path)
-- Step 3.3 vi $HADOOP_HOME/etc/hadoop/core-site.xml
-- Step 3.4 vi $HADOOP_HOME/etc/hadoop/hdfs-site.xml
-- Step 3.5 vi $HADOOP_HOME/etc/hadoop/mapred-site.xml
-- Step 3.6 vi $HADOOP_HOME/etc/hadoop/yarn-site.xml
-- Step 3.7 vi $HADOOP_HOME/etc/hadoop/fairscheduler.xml
-- Step 4. Create the required directories
-- Step 5. Start ZooKeeper and the JournalNodes, then format and start the Hadoop cluster
-- Step 5.1 Start ZooKeeper (the ZK ensemble consists of the five servers funshion-hadoop194, funshion-hadoop195, funshion-hadoop196, funshion-hadoop197, and funshion-hadoop198)
-- Step 5.2 Start the JournalNode process (run on each of funshion-hadoop194, funshion-hadoop195, funshion-hadoop196, funshion-hadoop197, and funshion-hadoop198):
-- Step 5.3 Format the Hadoop cluster and start it:
-- Step 6. Upload test data:
-- Step 7. Install Hive (on machine 196) (Hive integrated with HBase; built and installed from source)
-- Step 7.1 Install MySQL (on machine 194) and create a database named hive in it for the Hive metadata:
-- Step 7.2 Unpack the Hive package under /usr/local and add the Hive environment variables:
-- Step 7.3 Run the script that creates the Hive metadata schema against the hive database in MySQL:
-- Step 7.4 Edit the Hive configuration files:
-- 7.4.1 Edit $HIVE_HOME/bin/hive-config.sh and add the following environment variables:
-- 7.4.2 Fix line 2002 of $HIVE_HOME/conf/hive-site.xml:
-- 7.4.3 Change the following properties in $HIVE_HOME/conf/hive-site.xml:
-- Step 7.5 Start and log in to Hive, and create Hive tables

Setting up Hadoop 2.4.0 HA

Jun 07, 2016, 03:43 PM

Guiding questions:
1. Which configuration settings give Hadoop HA automatic failover?
2. In the configuration, what is the difference between the mapred.* and mapreduce.* property names?
3. What is the relationship between the two NameNodes in a Hadoop HA setup?

-- Hadoop version: 2.4.0
-- Installation package:
             hadoop-2.4.0.tar.gz, or the source tarball hadoop-2.4.0-src.tar.gz (I built Hadoop, HBase, and Hive from source)

-- Installation references:
http://www.netfoucs.com/article/book_mmicky/79985.html
http://www.byywee.com/page/M0/S934/934356.html
http://www.itpub.net/thread-1631536-1-1.html
http://demo.netfoucs.com/u014393917/article/details/25913363
http://www.aboutyun.com/thread-8294-1-1.html

-- If the native Hadoop library cannot be found, see:
           http://www.ercoppa.org/Linux-Com ... -hadoop-library.htm

-- LZO support; see:
http://blog.csdn.net/zhangzhaokun/article/details/17595325
http://slaytanic.blog.51cto.com/2057708/1162287/
http://hi.baidu.com/qingchunranzhi/item/3662ed5ed29d37a1adc85709


-- Install the following RPM packages:
yum -y install openssh*
yum -y install man*
yum -y install compat-libstdc++-33*
yum -y install libaio-0.*
yum -y install libaio-devel*
yum -y install sysstat-9.*
yum -y install glibc-2.*
yum -y install glibc-devel-2.* glibc-headers-2.*
yum -y install ksh-2*
yum -y install libgcc-4.*
yum -y install libstdc++-4.*
yum -y install libstdc++-4.*.i686*
yum -y install libstdc++-devel-4.*
yum -y install gcc-4.*x86_64*
yum -y install gcc-c++-4.*x86_64*
yum -y install elfutils-libelf-0*x86_64* elfutils-libelf-devel-0*x86_64*
yum -y install elfutils-libelf-0*i686* elfutils-libelf-devel-0*i686*
yum -y install libtool-ltdl*i686*
yum -y install ncurses*i686*
yum -y install ncurses*
yum -y install readline*
yum -y install unixODBC*
yum -y install zlib
yum -y install zlib*
yum -y install openssl*
yum -y install patch
yum -y install git
yum -y install lzo-devel zlib-devel gcc autoconf automake libtool
yum -y install lzop
yum -y install lrzsz
yum -y install nc
yum -y install glibc
yum -y install java-1.7.0-openjdk
yum -y install gzip
yum -y install zlib
yum -y install gcc
yum -y install gcc-c++
yum -y install make
yum -y install protobuf
yum -y install protoc
yum -y install cmake
yum -y install openssl-devel
yum -y install ncurses-devel
yum -y install unzip
yum -y install telnet
yum -y install telnet-server
yum -y install wget
yum -y install svn
yum -y install ntpdate

-- Hive installation, see: http://kicklinux.com/hive-deploy/

Layout of the five servers

IP address        Hostname            NameNode  JournalNode  DataNode  Zookeeper  HBase  Hive
192.168.117.194   funshion-hadoop194
192.168.117.195   funshion-hadoop195
192.168.117.196   funshion-hadoop196  Yes (Master)  Yes (MySQL)
192.168.117.197   funshion-hadoop197
192.168.117.198   funshion-hadoop198


--  Configure Linux and install the JDK

-- Reference: installing the Java JDK on Linux (Ubuntu), setting the environment variables, and running a small test program
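-- A minimal sketch of the JDK environment setup this guide assumes (/usr/java/latest is the path referenced later in hadoop-env.sh and hive-config.sh); append to /etc/profile or ~/.bash_profile on every node and source it:
export JAVA_HOME=/usr/java/latest
export PATH=$JAVA_HOME/bin:$PATH
java -version    # sanity check that the JDK is on the PATH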

-- Step 1. Set up passwordless SSH login for the hadoop user

-- References:
Passwordless, mutual SSH login between Linux (Ubuntu) hosts: a high-reliability walkthrough
Illustrated two-way passwordless SSH configuration on CentOS 6.4
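-- A minimal sketch of what those references boil down to (run as the hadoop user; assumes all five hostnames resolve and that ssh-copy-id is available):
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
for host in funshion-hadoop194 funshion-hadoop195 funshion-hadoop196 funshion-hadoop197 funshion-hadoop198; do
    ssh-copy-id hadoop@${host}    # append the public key to each node's authorized_keys
done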

-- Step 2. ZooKeeper configuration (use an odd number of ZK nodes; I use five)
-- Reference: detailed walkthrough of installing a ZooKeeper cluster
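-- A rough sketch of the per-node ZooKeeper setup this guide assumes (installation under /usr/local/zookeeper and the default client port 2181, as used later; the dataDir path here is an assumption, adjust it to your layout):
cat > /usr/local/zookeeper/conf/zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/data
clientPort=2181
server.1=funshion-hadoop194:2888:3888
server.2=funshion-hadoop195:2888:3888
server.3=funshion-hadoop196:2888:3888
server.4=funshion-hadoop197:2888:3888
server.5=funshion-hadoop198:2888:3888
EOF
# each node also needs a myid file matching its server.N entry, e.g. on funshion-hadoop194:
mkdir -p /usr/local/zookeeper/data && echo 1 > /usr/local/zookeeper/data/myid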

-- Step 3. Hadoop cluster configuration:


-- Step 3.1 vi $HADOOP_HOME/etc/hadoop/slaves

funshion-hadoop196
funshion-hadoop197
funshion-hadoop198

-- Step 3.2 vi $HADOOP_HOME/etc/hadoop/hadoop-env.sh  (add the JAVA_HOME environment variable and the native library path)

export JAVA_HOME=/usr/java/latest
export LD_LIBRARY_PATH=/usr/local/hadoop/lzo/lib
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib/native"

-- Note: the contents of ${HADOOP_PREFIX}/lib/native are as follows:
[hadoop@funshion-hadoop194 native]$ pwd
/usr/local/hadoop/lib/native

[hadoop@funshion-hadoop194 native]$ ls -l
total 8640
-rw-r--r--. 1 hadoop hadoop 2850660 Jun  9 14:58 hadoop-common-2.4.0.jar
-rw-r--r--. 1 hadoop hadoop 1509888 Jun  9 14:58 hadoop-common-2.4.0-tests.jar
-rw-r--r--. 1 hadoop hadoop  178637 Jun  9 14:58 hadoop-lzo-0.4.20-SNAPSHOT.jar
-rw-r--r--. 1 hadoop hadoop  145385 Jun  9 14:58 hadoop-nfs-2.4.0.jar
-rw-r--r--. 1 hadoop hadoop  983042 Jun  6 19:36 libhadoop.a
-rw-r--r--. 1 hadoop hadoop 1487284 Jun  6 19:36 libhadooppipes.a
lrwxrwxrwx. 1 hadoop hadoop      18 Jun  6 19:42 libhadoop.so -> libhadoop.so.1.0.0
-rwxr-xr-x. 1 hadoop hadoop  586664 Jun  6 19:36 libhadoop.so.1.0.0
-rw-r--r--. 1 hadoop hadoop  582040 Jun  6 19:36 libhadooputils.a
-rw-r--r--. 1 hadoop hadoop  298178 Jun  6 19:36 libhdfs.a
lrwxrwxrwx. 1 hadoop hadoop      16 Jun  6 19:42 libhdfs.so -> libhdfs.so.0.0.0
-rwxr-xr-x. 1 hadoop hadoop  200026 Jun  6 19:36 libhdfs.so.0.0.0
drwxrwxr-x. 2 hadoop hadoop    4096 Jun  6 20:37 Linux-amd64-64
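-- Optional sanity check: recent Hadoop 2.x releases ship a checknative command that reports whether the native hadoop, zlib, lzo, and snappy libraries are actually being loaded (assuming your build includes it):
$HADOOP_HOME/bin/hadoop checknative -a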

-- Step 3.3 vi $HADOOP_HOME/etc/hadoop/core-site.xml

-- (Note: fs.defaultFS is identical on both NameNodes; in fact, core-site.xml is identical on all five machines.)



<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa_nn2</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>funshion-hadoop194:2181,funshion-hadoop195:2181,funshion-hadoop196:2181,funshion-hadoop197:2181,funshion-hadoop198:2181</value>
  </property>

  <property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.BZip2Codec</value>
  </property>
  <property>
    <name>io.compression.codec.lzo.class</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.native.lib</name>
    <value>true</value>
  </property>
  <property>
    <name>ha.zookeeper.session-timeout.ms</name>
    <value>60000</value>
    <description>ms</description>
  </property>
  <property>
    <name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
    <value>60000</value>
  </property>
  <property>
    <name>ipc.client.connect.timeout</name>
    <value>20000</value>
  </property>
</configuration>




-- Note: the value of dfs.ha.fencing.ssh.private-key-files, id_rsa_nn2, is the SSH private key (a copy of the id_rsa file in /home/hadoop/.ssh/, with its permissions set to 600):
        <property>
                <name>dfs.ha.fencing.ssh.private-key-files</name>
                <value>/home/hadoop/.ssh/id_rsa_nn2</value>
        </property>

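-- A minimal sketch of preparing that key copy on the NameNode hosts (assumes ~/.ssh/id_rsa already exists from Step 1):
cp /home/hadoop/.ssh/id_rsa /home/hadoop/.ssh/id_rsa_nn2
chmod 600 /home/hadoop/.ssh/id_rsa_nn2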

-- Step 3.4 vi $HADOOP_HOME/etc/hadoop/hdfs-site.xml




<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>funshion-hadoop194:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>funshion-hadoop195:8020</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-address.mycluster.nn1</name>
    <value>funshion-hadoop194:53310</value>
  </property>
  <property>
    <name>dfs.namenode.servicerpc-address.mycluster.nn2</name>
    <value>funshion-hadoop195:53310</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>funshion-hadoop194:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>funshion-hadoop195:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://funshion-hadoop194:8485;funshion-hadoop195:8485;funshion-hadoop196:8485;funshion-hadoop197:8485;funshion-hadoop198:8485/mycluster</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop/mydata/journal</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/mydata/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop/mydata/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.image.transfer.bandwidthPerSec</name>
    <value>1048576</value>
  </property>
</configuration>



-- Step 3.5 vi $HADOOP_HOME/etc/hadoop/mapred-site.xml



<configuration>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>funshion-hadoop194:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>funshion-hadoop194:19888</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>com.hadoop.compression.lzo.LzoCodec</value>
  </property>
  <property>
    <name>mapred.child.env</name>
    <value>LD_LIBRARY_PATH=/usr/local/hadoop/lib/native</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx2048m</value>
  </property>
  <property>
    <name>mapred.reduce.child.java.opts</name>
    <value>-Xmx2048m</value>
  </property>
  <property>
    <name>mapred.map.child.java.opts</name>
    <value>-Xmx2048m</value>
  </property>
  <property>
    <name>mapred.remote.os</name>
    <value>Linux</value>
    <description>Remote MapReduce framework's OS, can be either Linux or Windows</description>
  </property>
</configuration>




-- Note: 1. Property names starting with mapred. are the deprecated form; prefer the mapreduce. names.
            For example, mapred.compress.map.output should now be written as mapreduce.map.output.compress.
            See http://hadoop.apache.org/docs/r2 ... /mapred-default.xml for the full list.
      A few property names, such as mapred.child.java.opts and mapred.child.env, apparently have not been renamed.

-- Note: /usr/local/hadoop/lib/native contains the following:
[hadoop@funshion-hadoop194 sbin]$ ls -l /usr/local/hadoop/lib/native
total 12732
-rw-r--r-- 1 hadoop hadoop 2850900 Jun 20 19:22 hadoop-common-2.4.0.jar
-rw-r--r-- 1 hadoop hadoop 1509411 Jun 20 19:22 hadoop-common-2.4.0-tests.jar
-rw-r--r-- 1 hadoop hadoop  178559 Jun 20 18:38 hadoop-lzo-0.4.20-SNAPSHOT.jar
-rw-r--r-- 1 hadoop hadoop 1407039 Jun 20 19:25 hadoop-yarn-common-2.4.0.jar
-rw-r--r-- 1 hadoop hadoop  106198 Jun 20 18:37 libgplcompression.a
-rw-r--r-- 1 hadoop hadoop    1124 Jun 20 18:37 libgplcompression.la
-rwxr-xr-x 1 hadoop hadoop   69347 Jun 20 18:37 libgplcompression.so
-rwxr-xr-x 1 hadoop hadoop   69347 Jun 20 18:37 libgplcompression.so.0
-rwxr-xr-x 1 hadoop hadoop   69347 Jun 20 18:37 libgplcompression.so.0.0.0
-rw-r--r-- 1 hadoop hadoop  983042 Jun 20 18:10 libhadoop.a
-rw-r--r-- 1 hadoop hadoop 1487284 Jun 20 18:10 libhadooppipes.a
lrwxrwxrwx 1 hadoop hadoop      18 Jun 20 18:27 libhadoop.so -> libhadoop.so.1.0.0
-rwxr-xr-x 1 hadoop hadoop  586664 Jun 20 18:10 libhadoop.so.1.0.0
-rw-r--r-- 1 hadoop hadoop  582040 Jun 20 18:10 libhadooputils.a
-rw-r--r-- 1 hadoop hadoop  298178 Jun 20 18:10 libhdfs.a
lrwxrwxrwx 1 hadoop hadoop      16 Jun 20 18:27 libhdfs.so -> libhdfs.so.0.0.0
-rwxr-xr-x 1 hadoop hadoop  200026 Jun 20 18:10 libhdfs.so.0.0.0
-rw-r--r-- 1 hadoop hadoop  906318 Jun 20 19:17 liblzo2.a
-rwxr-xr-x 1 hadoop hadoop     929 Jun 20 19:17 liblzo2.la
-rwxr-xr-x 1 hadoop hadoop  562376 Jun 20 19:17 liblzo2.so
-rwxr-xr-x 1 hadoop hadoop  562376 Jun 20 19:17 liblzo2.so.2
-rwxr-xr-x 1 hadoop hadoop  562376 Jun 20 19:17 liblzo2.so.2.0.0

-- Step 3.6 vi $HADOOP_HOME/etc/hadoop/yarn-site.xml




<configuration>
  <property>
    <name>yarn.resourcemanager.connect.retry-interval.ms</name>
    <value>60000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>rm-cluster</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.id</name>
    <value>rm1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>funshion-hadoop194</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>funshion-hadoop195</value>
  </property>
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>funshion-hadoop194:2181,funshion-hadoop195:2181,funshion-hadoop196:2181,funshion-hadoop197:2181,funshion-hadoop198:2181</value>
  </property>

  <property>
    <name>yarn.resourcemanager.address.rm1</name>
    <value>${yarn.resourcemanager.hostname.rm1}:23140</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm1</name>
    <value>${yarn.resourcemanager.hostname.rm1}:23130</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address.rm1</name>
    <value>${yarn.resourcemanager.hostname.rm1}:23189</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>${yarn.resourcemanager.hostname.rm1}:23188</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
    <value>${yarn.resourcemanager.hostname.rm1}:23125</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address.rm1</name>
    <value>${yarn.resourcemanager.hostname.rm1}:23141</value>
  </property>

  <property>
    <name>yarn.resourcemanager.address.rm2</name>
    <value>${yarn.resourcemanager.hostname.rm2}:23140</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm2</name>
    <value>${yarn.resourcemanager.hostname.rm2}:23130</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address.rm2</name>
    <value>${yarn.resourcemanager.hostname.rm2}:23189</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>${yarn.resourcemanager.hostname.rm2}:23188</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
    <value>${yarn.resourcemanager.hostname.rm2}:23125</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address.rm2</name>
    <value>${yarn.resourcemanager.hostname.rm2}:23141</value>
  </property>

  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <name>yarn.scheduler.fair.allocation.file</name>
    <value>${yarn.home.dir}/etc/hadoop/fairscheduler.xml</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/hadoop/logs/yarn_local</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/home/hadoop/logs/yarn_log</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/home/hadoop/logs/yarn_remotelog</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/home/hadoop/logs/yarn_userstag</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/home/hadoop/logs/yarn_intermediatedone</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/var/lib/hadoop/dfs/yarn_done</value>
  </property>

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4.2</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <description>Classpath for typical applications.</description>
    <name>yarn.application.classpath</name>
    <value>
      $HADOOP_HOME/etc/hadoop,
      $HADOOP_HOME/share/hadoop/common/*,
      $HADOOP_HOME/share/hadoop/common/lib/*,
      $HADOOP_HOME/share/hadoop/hdfs/*,
      $HADOOP_HOME/share/hadoop/hdfs/lib/*,
      $HADOOP_HOME/share/hadoop/mapreduce/*,
      $HADOOP_HOME/share/hadoop/mapreduce/lib/*,
      $HADOOP_HOME/share/hadoop/yarn/*,
      $HADOOP_HOME/share/hadoop/yarn/lib/*
    </value>
  </property>
</configuration>





-- Note: of the two NameNode hosts (which also run the ResourceManagers), funshion-hadoop194 uses the configuration above as-is;
--       on funshion-hadoop195 only one change is needed: set yarn.resourcemanager.ha.id to rm2.

-- Step 3.7 vi $HADOOP_HOME/etc/hadoop/fairscheduler.xml





1024 mb, 1 vcores
1536 mb, 1 vcores
5
300
1.0
root,yarn,search,hdfs


1024 mb, 1 vcores
1536 mb, 1 vcores


1024 mb, 1 vcores
1536 mb, 1 vcores



################################################################################## 

scp -r /usr/local/hadoop/etc/hadoop/* hadoop@funshion-hadoop195:/usr/local/hadoop/etc/hadoop/
scp -r /usr/local/hadoop/etc/hadoop/* hadoop@funshion-hadoop196:/usr/local/hadoop/etc/hadoop/
scp -r /usr/local/hadoop/etc/hadoop/* hadoop@funshion-hadoop197:/usr/local/hadoop/etc/hadoop/
scp -r /usr/local/hadoop/etc/hadoop/* hadoop@funshion-hadoop198:/usr/local/hadoop/etc/hadoop/

-- Step 4. Create the required directories


mkdir ~/logs
mkdir ~/mydata

-- Note: the subdirectories under mydata are created automatically; there is no need to create them by hand.
-- Create these two directories on every machine in the cluster, and sync everything under $HADOOP_HOME/etc/hadoop to each node (a small loop for the directories is sketched below).
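-- A minimal sketch for creating the directories on the remaining nodes (assumes the passwordless SSH set up in Step 1):
for host in funshion-hadoop195 funshion-hadoop196 funshion-hadoop197 funshion-hadoop198; do
    ssh hadoop@${host} "mkdir -p ~/logs ~/mydata"
done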


-- Step 5. Start ZooKeeper and the JournalNodes, then format and start the Hadoop cluster



-- Step 5.1 Start ZooKeeper (the ZK ensemble consists of the five servers funshion-hadoop194, funshion-hadoop195, funshion-hadoop196, funshion-hadoop197, and funshion-hadoop198)


[hadoop@funshion-hadoop194 bin]$ /usr/local/zookeeper/bin/zkServer.sh start
[hadoop@funshion-hadoop195 bin]$ /usr/local/zookeeper/bin/zkServer.sh start
[hadoop@funshion-hadoop196 bin]$ /usr/local/zookeeper/bin/zkServer.sh start
[hadoop@funshion-hadoop197 bin]$ /usr/local/zookeeper/bin/zkServer.sh start
[hadoop@funshion-hadoop198 bin]$ /usr/local/zookeeper/bin/zkServer.sh start

-- Check the status of each ZooKeeper node as follows:
/usr/local/zookeeper/bin/zkServer.sh status


-- Then, on one of the NameNode hosts, run the following to create the HA namespace (znode) in ZooKeeper:
[hadoop@funshion-hadoop194 bin]$ cd $HADOOP_HOME
[hadoop@funshion-hadoop194 hadoop]$ ./bin/hdfs zkfc -formatZK

-- Note: ZooKeeper is stopped or restarted with commands like:
/usr/local/zookeeper/bin/zkServer.sh stop
/usr/local/zookeeper/bin/zkServer.sh restart

-- Step 5.2 Start the JournalNode process (run on each of funshion-hadoop194, funshion-hadoop195, funshion-hadoop196, funshion-hadoop197, and funshion-hadoop198):


[hadoop@funshion-hadoop194 bin]$ $HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode
[hadoop@funshion-hadoop195 bin]$ $HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode
[hadoop@funshion-hadoop196 bin]$ $HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode
[hadoop@funshion-hadoop197 bin]$ $HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode
[hadoop@funshion-hadoop198 bin]$ $HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode


-- Step 5.3 Format the Hadoop cluster and start it:


-- Run on funshion-hadoop194:
[hadoop@funshion-hadoop194 bin]$ $HADOOP_HOME/bin/hdfs namenode -format mycluster
[hadoop@funshion-hadoop194 bin]$ $HADOOP_HOME/sbin/hadoop-daemon.sh start namenode

-- After the previous step completes, run on funshion-hadoop195:
[hadoop@funshion-hadoop195 bin]$ $HADOOP_HOME/bin/hdfs namenode -bootstrapStandby
[hadoop@funshion-hadoop195 bin]$ $HADOOP_HOME/sbin/hadoop-daemon.sh start namenode

-- After that, run $HADOOP_HOME/sbin/start-all.sh on one of the NameNodes to start the DataNodes and the YARN daemons.

-- Because automatic failover is configured, you cannot manually switch the NameNodes between the active and standby roles.

-- Use haadmin to check the role of each NameNode service:
[hadoop@funshion-hadoop194 lab]$ $HADOOP_HOME/bin/hdfs haadmin -getServiceState nn1
standby
[hadoop@funshion-hadoop194 lab]$ $HADOOP_HOME/bin/hdfs haadmin -getServiceState nn2
active
[hadoop@funshion-hadoop194 lab]$


-- From the following hdfs-site.xml settings we know that nn1 is the NameNode service on funshion-hadoop194 and nn2 is the one on funshion-hadoop195:

<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>funshion-hadoop194:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>funshion-hadoop195:8020</value>
</property>


-- So we can try killing nn2 (the NameNode process that is currently active) and then check whether nn1's role changes:
[hadoop@funshion-hadoop195 bin]$ jps
3199 JournalNode
3001 NameNode
1161 QuorumPeerMain
3364 DFSZKFailoverController
4367 Jps

[hadoop@funshion-hadoop195 bin]$ kill -9 3001
[hadoop@funshion-hadoop195 bin]$ jps
3199 JournalNode
1161 QuorumPeerMain
3364 DFSZKFailoverController
4381 Jps


[hadoop@funshion-hadoop195 bin]$ $HADOOP_HOME/bin/hdfs haadmin -getServiceState nn1
active
[hadoop@funshion-hadoop195 bin]$ $HADOOP_HOME/sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-funshion-hadoop195.out
[hadoop@funshion-hadoop195 bin]$ $HADOOP_HOME/bin/hdfs haadmin -getServiceState nn1
active
[hadoop@funshion-hadoop195 bin]$ $HADOOP_HOME/bin/hdfs haadmin -getServiceState nn2
standby


-- You can even reboot the host whose NameNode is currently active (an OS-level restart) and watch whether the standby NameNode transitions to active correctly.
-- You can even reboot the OS on the active NameNode host while jobs are running, to verify that failover between the two nodes is genuinely robust (a simple monitoring loop is sketched below).
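-- A trivial way to watch both roles while you kill or reboot the active NameNode (same haadmin calls as above):
while true; do
    echo -n "nn1: "; $HADOOP_HOME/bin/hdfs haadmin -getServiceState nn1
    echo -n "nn2: "; $HADOOP_HOME/bin/hdfs haadmin -getServiceState nn2
    sleep 5
done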


-- Cluster web UIs:
-- http://funshion-hadoop194:50070/dfshealth.html#tab-overview

-- ##################################################################################


-- Step 6. Upload test data:


-- Step 6.1 Install wget, create the working directories, and create the shell script that uploads the data:
[root@funshion-hadoop194 ~]# yum -y install wget
[hadoop@funshion-hadoop194 ~]$ 
[hadoop@funshion-hadoop194 ~]$ mkdir -p /home/hadoop/datateam/ghh/lab
[hadoop@funshion-hadoop194 ~]$ mkdir -p /home/hadoop/log_catch/down
[hadoop@funshion-hadoop194 ~]$ mkdir -p /home/hadoop/log_catch/put
[hadoop@funshion-hadoop194 ~]$ mkdir -p /home/hadoop/log_catch/zip
[hadoop@funshion-hadoop194 ~]$ vi /home/hadoop/datateam/ghh/lab/log_catch_hour_lzo.sh

#!/bin/bash


function f_show_info()
{
printf "%20s = %s\n" "$1" "$2"
return 0
}


function f_catch_all_day_log()
{
local str_date=""
local year=""
local month=""
local day=""


str_date=${g_start_date}
while [[ ${str_date} -le ${g_end_date} ]]
do
year=$(date -d "${str_date}" +%Y )
month=$(date -d "${str_date}" +%m )
day=$(date -d "${str_date}" +%d )
f_catch_all_log ${year} ${month} ${day}
# advance to the next calendar day
str_date=$(date -d "${str_date} 1 day" +%Y%m%d)
done
}


function f_catch_all_log()
{
local year="$1"
local month="$2"
local day="$3"
local hour=""
local date_hour=""
local date_dir=""
local hdfs_dir=""
local g_hdfs_dir=""
local hdfs_file=""
local url=""
local i=0
local nRet=0

for (( i=${g_start_hour}; i<=${g_end_hour}; i++ ))
do
hour=$(printf "%02d" "$i")
date_hour="${year}${month}${day}${hour}"
date_dir="${year}/${month}/${day}"
hdfs_dir="${year}/${month}/${day}/${hour}"
g_hdfs_dir="${g_hdfs_path}/${hdfs_dir}"
hdfs_file="${g_hdfs_path}/${hdfs_dir}/BeiJing_YiZhuang_CTC_${date_hour}.lzo"


url="${g_url}/${date_dir}/BeiJing_YiZhuang_CTC_${date_hour}.gz"
f_show_info "url" "${url}"
f_show_info "hdfs" "${hdfs_file}"
f_catch_log "${url}" "${hdfs_file}" "${g_hdfs_dir}"


hdfs_file="${g_hdfs_path}/${hdfs_dir}/BeiJing_ShangDi_CNC_${date_hour}.lzo"
url="${g_url}/${date_dir}/BeiJing_ShangDi_CNC_${date_hour}.gz"
f_show_info "url" "${url}"
f_show_info "hdfs" "${hdfs_file}"
f_catch_log "${url}" "${hdfs_file}" "${g_hdfs_dir}"
done
return $nRet
}


function f_catch_log()
{
local tmp_name=$( uuidgen | sed 's/-/_/g' )
local local_down_file="${g_local_down_path}/${tmp_name}"
local local_zip_file="${g_local_zip_path}/${tmp_name}"
local local_put_file="${g_local_put_path}/${tmp_name}"
local log_url="$1"
local hdfs_file="$2"
local nRet=0


if [[ 0 == $nRet ]];then
wget -O "${local_down_file}" "${log_url}"
nRet=$?
fi

if [[ 0 == $nRet ]];then
gzip -cd "${local_down_file}" | lzop -o "${local_zip_file}"
nRet=$?
fi


#       if [[ 0 == $nRet ]];then
#               gzip -cd "${local_down_file}" > "${local_zip_file}"
#               nRet=$?
#       fi


if [[ 0 == $nRet ]];then
mv "${local_zip_file}" "${local_put_file}"
hdfs dfs -mkdir -p "${g_hdfs_dir}"
hdfs dfs -put "${local_put_file}" "${hdfs_file}"
nRet=$?
fi


if [[ 0 == $nRet ]];then
hadoop jar /usr/local/hadoop/lib/native/hadoop-lzo-0.4.20-SNAPSHOT.jar com.hadoop.compression.lzo.LzoIndexer "${hdfs_file}"
nRet=$?
fi


rm -rf "${local_down_file}" "${local_put_file}" "${local_zip_file}"


return $nRet
}

# shell begins here

g_local_down_path="/home/hadoop/log_catch/down"
g_local_zip_path="/home/hadoop/log_catch/zip"
g_local_put_path="/home/hadoop/log_catch/put"

g_start_date=""
g_end_date=""
g_start_hour=0
g_end_hour=0
g_hdfs_path=""
g_url=""

nRet=0


if [[ 0 == $nRet ]];then
if [[ $# -ne 6 ]];then
f_show_info "cmd format" "sh ./log_catch.sh 'url' 'hdfs_path' 'start_date' 'end_date' 'start_hour' 'end_hour'"
nRet=1
else
g_url="$1"
g_hdfs_path="$2"
g_start_date="$3"
g_end_date="$4"
g_start_hour="$5"
g_end_hour="$6"
fi
fi


if [[ 0 == $nRet ]];then
f_catch_all_day_log
nRet=$?
fi


exit $nRet

-- Step 6.2 Run the script to upload the data:
[hadoop@funshion-hadoop194 ~]$ nohup sh /home/hadoop/datateam/ghh/lab/log_catch_hour_lzo.sh 'http://192.168.116.61:8081/website/pv/2' 'hdfs://mycluster/dw/logs/web/origin/pv/2' 20140524 20140525 0 23 &

-- nohup sh /home/hadoop/datateam/ghh/lab/log_catch_hour_lzo.sh 'http://192.168.116.61:8081/website/pv/2' 'hdfs://mycluster/dw/logs/web/origin/pv/2' 20140525 20140525 3 23 &

-- The script above pulls our company's Oxeye log data (you can skip this step).

-- Step 7. Install Hive (on machine 196) (Hive integrated with HBase; built and installed from source)

-- (It would arguably make more sense to install HBase first and Hive afterwards.)


-- References: https://cwiki.apache.org/conflue ... iorto0.13onHadoop23
http://www.hadoopor.com/thread-5470-1-1.html
http://blog.csdn.net/hguisu/article/details/7282050
http://blog.csdn.net/hguisu/article/details/7282050
http://www.micmiu.com/bigdata/hive/hive-hbase-integration/

-- Download and build the source as follows:
mkdir -p /opt/software/hive_src
cd /opt/software/hive_src/
svn checkout http://svn.apache.org/repos/asf/hive/trunk/ hive_trunk
cd /opt/software/hive_src/hive_trunk


-- After checking out the source, inspect pom.xml in the hive_trunk directory: the hadoop-23.version property already points at Hadoop 2.4.0, so nothing needs to be changed and we can build directly with Maven:
<hadoop-23.version>2.4.0</hadoop-23.version>
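-- A quick way to confirm which Hadoop version the hadoop-2 profile will build against (the property name is the one found in the Hive trunk pom.xml at the time):
grep -n "hadoop-23.version" /opt/software/hive_src/hive_trunk/pom.xml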


-- Alternatively, if the versions are not what you want, you can pass them as parameters to the build (or edit the corresponding Hadoop, HBase, and ZooKeeper versions in pom.xml).
-- The versions I ended up using were:
   2.4.0            (Hadoop, hadoop-23.version)
   0.98.3-hadoop1   (HBase, hadoop-1 profile)
   0.98.3-hadoop2   (HBase, hadoop-2 profile)
   3.4.6            (ZooKeeper)


-- Finally, start the build:
cd /opt/software/hive_src/hive_trunk
mvn clean package -DskipTests -Phadoop-2,dist


[INFO] Hive .............................................. SUCCESS [  6.481 s]
[INFO] Hive Ant Utilities ................................ SUCCESS [  4.427 s]
[INFO] Hive Shims Common ................................. SUCCESS [  2.418 s]
[INFO] Hive Shims 0.20 ................................... SUCCESS [  1.284 s]
[INFO] Hive Shims Secure Common .......................... SUCCESS [  2.466 s]
[INFO] Hive Shims 0.20S .................................. SUCCESS [  0.961 s]
[INFO] Hive Shims 0.23 ................................... SUCCESS [  3.247 s]
[INFO] Hive Shims ........................................ SUCCESS [  0.364 s]
[INFO] Hive Common ....................................... SUCCESS [  5.259 s]
[INFO] Hive Serde ........................................ SUCCESS [  7.428 s]
[INFO] Hive Metastore .................................... SUCCESS [ 27.000 s]
[INFO] Hive Query Language ............................... SUCCESS [ 51.924 s]
[INFO] Hive Service ...................................... SUCCESS [  6.037 s]
[INFO] Hive JDBC ......................................... SUCCESS [ 14.293 s]
[INFO] Hive Beeline ...................................... SUCCESS [  1.406 s]
[INFO] Hive CLI .......................................... SUCCESS [ 10.297 s]
[INFO] Hive Contrib ...................................... SUCCESS [  1.418 s]
[INFO] Hive HBase Handler ................................ SUCCESS [ 33.679 s]
[INFO] Hive HCatalog ..................................... SUCCESS [  0.443 s]
[INFO] Hive HCatalog Core ................................ SUCCESS [  8.040 s]
[INFO] Hive HCatalog Pig Adapter ......................... SUCCESS [  1.795 s]
[INFO] Hive HCatalog Server Extensions ................... SUCCESS [  2.007 s]
[INFO] Hive HCatalog Webhcat Java Client ................. SUCCESS [  1.548 s]
[INFO] Hive HCatalog Webhcat ............................. SUCCESS [ 11.718 s]
[INFO] Hive HCatalog Streaming ........................... SUCCESS [  1.845 s]
[INFO] Hive HWI .......................................... SUCCESS [  1.246 s]
[INFO] Hive ODBC ......................................... SUCCESS [  0.626 s]
[INFO] Hive Shims Aggregator ............................. SUCCESS [  0.192 s]
[INFO] Hive TestUtils .................................... SUCCESS [  0.324 s]
[INFO] Hive Packaging .................................... SUCCESS [01:21 min]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 04:53 min
[INFO] Finished at: 2014-06-22T11:58:05+08:00
[INFO] Final Memory: 147M/1064M
[INFO] ------------------------------------------------------------------------

-- When it finishes, the file apache-hive-0.14.0-SNAPSHOT-bin.tar.gz under /opt/software/hive_src/hive_trunk/packaging/target is the installation package we need (this version has not been officially released yet).

-- Step 7.1 Install MySQL (on machine 194) and create a database named hive in it for the Hive metadata:


-- Install the following RPM packages:
rpm -ivh MySQL-client-5.6.17-1.linux_glibc2.5.x86_64.rpm
rpm -ivh MySQL-devel-5.6.17-1.linux_glibc2.5.x86_64.rpm
rpm -ivh MySQL-embedded-5.6.17-1.linux_glibc2.5.x86_64.rpm
rpm -e --nodeps mysql-libs-5.1.66-2.el6_3.x86_64
rpm -ivh MySQL-server-5.6.17-1.linux_glibc2.5.x86_64.rpm
rpm -ivh MySQL-shared-5.6.17-1.linux_glibc2.5.x86_64.rpm
rpm -ivh MySQL-shared-compat-5.6.17-1.linux_glibc2.5.x86_64.rpm
rpm -ivh MySQL-test-5.6.17-1.linux_glibc2.5.x86_64.rpm


A RANDOM PASSWORD HAS BEEN SET FOR THE MySQL root USER !
You will find that password in '/root/.mysql_secret'.

You must change that password on your first connect,
no other statement but 'SET PASSWORD' will be accepted.
See the manual for the semantics of the 'password expired' flag.

Also, the account for the anonymous user has been removed.

In addition, you can run:

 /usr/bin/mysql_secure_installation

which will also give you the option of removing the test database.
This is strongly recommended for production servers.


See the manual for more instructions.
Please report any problems at http://bugs.mysql.com/
The latest information about MySQL is available on the web at
 http://www.mysql.com
Support MySQL by buying support/licenses at http://shop.mysql.com
New default config file was created as /usr/my.cnf and
will be used by default by the server when you start it.
You may edit this file to change server settings
-- Check the random root password generated during installation:
[root@funshion-hadoop194 ~]# more /root/.mysql_secret
# The random password set for the root user at Mon Jun  9 18:18:48 2014 (local time): QVkyOjwSlAEiPaeT


-- Log in to MySQL, change the root password, and create the hive database and user:
[root@funshion-hadoop194 ~]# service mysql start
Starting MySQL... SUCCESS! 


-- Enable the mysql service at boot:
chkconfig mysql on


[root@funshion-hadoop194 ~]# mysql -uroot -pQVkyOjwSlAEiPaeT
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.6.17

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SET PASSWORD = PASSWORD('bee56915');
Query OK, 0 rows affected (0.00 sec)


mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)


mysql> CREATE DATABASE `hive` /*!40100 DEFAULT CHARACTER SET utf8 */;
Query OK, 1 row affected (0.00 sec)


mysql> CREATE USER 'hive'@'funshion-hadoop196' IDENTIFIED BY 'bee56915';
Query OK, 0 rows affected (0.00 sec)


GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' Identified by 'bee56915'; 
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost' Identified by 'bee56915'; 
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'127.0.0.1' Identified by 'bee56915';  
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'funshion-hadoop196' Identified by 'bee56915'; 

-- Step 7.2 Unpack the Hive package under /usr/local and add the Hive environment variables:

[root@funshion-hadoop194 ~]# cd /opt/software
[root@funshion-hadoop194 software]# ls -l|grep hive
-rw-r--r--.  1 root root  65662469 May 15 14:04 hive-0.12.0-bin.tar.gz
[root@funshion-hadoop194 software]# tar -xvf ./hive-0.12.0-bin.tar.gz
[root@funshion-hadoop194 software]#  mv hive-0.12.0-bin /usr/local
[root@funshion-hadoop194 software]# cd /usr/local
[root@funshion-hadoop194 local]# chown -R hadoop.hadoop ./hive-0.12.0-bin
[root@funshion-hadoop194 local]# ln -s hive-0.12.0-bin hive

[hadoop@funshion-hadoop194 local]$ vi ~/.bash_profile
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin

[hadoop@funshion-hadoop194 local]$ source ~/.bash_profile

-- Step 7.3 Run the script that creates the Hive metadata schema against the hive database in MySQL:


[hadoop@funshion-hadoop194 mysql]$ mysql -uroot -pbee56915
Warning: Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.6.17 MySQL Community Server (GPL)


Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.


Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use hive;
Database changed
mysql> source /usr/local/hive/scripts/metastore/upgrade/mysql/hive-schema-0.14.0.mysql.sql

mysql> show tables;
+---------------------------+
| Tables_in_hive            |
+---------------------------+
| BUCKETING_COLS            |
| CDS                       |
| COLUMNS_V2                |
| DATABASE_PARAMS           |
| DBS                       |
| DB_PRIVS                  |
| DELEGATION_TOKENS         |
| GLOBAL_PRIVS              |
| IDXS                      |
| INDEX_PARAMS              |
| MASTER_KEYS               |
| NUCLEUS_TABLES            |
| PARTITIONS                |
| PARTITION_EVENTS          |
| PARTITION_KEYS            |
| PARTITION_KEY_VALS        |
| PARTITION_PARAMS          |
| PART_COL_PRIVS            |
| PART_COL_STATS            |
| PART_PRIVS                |
| ROLES                     |
| ROLE_MAP                  |
| SDS                       |
| SD_PARAMS                 |
| SEQUENCE_TABLE            |
| SERDES                    |
| SERDE_PARAMS              |
| SKEWED_COL_NAMES          |
| SKEWED_COL_VALUE_LOC_MAP  |
| SKEWED_STRING_LIST        |
| SKEWED_STRING_LIST_VALUES |
| SKEWED_VALUES             |
| SORT_COLS                 |
| TABLE_PARAMS              |
| TAB_COL_STATS             |
| TBLS                      |
| TBL_COL_PRIVS             |
| TBL_PRIVS                 |
| TYPES                     |
| TYPE_FIELDS               |
| VERSION                   |
+---------------------------+
41 rows in set (0.00 sec)

mysql> grant all privileges on hive.* to 'hive'@'funshion-hadoop196';
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)

mysql> exit
Bye
[hadoop@funshion-hadoop194 mysql]$ 

-- Step 7.4 Edit the Hive configuration files:

[hadoop@funshion-hadoop194 mysql]$ cd $HIVE_HOME/conf
[hadoop@funshion-hadoop194 conf]$ ls -l
total 92
-rw-rw-r--. 1 hadoop hadoop 81186 Oct 10  2013 hive-default.xml.template
-rw-rw-r--. 1 hadoop hadoop  2378 Oct 10  2013 hive-env.sh.template
-rw-rw-r--. 1 hadoop hadoop  2465 Oct 10  2013 hive-exec-log4j.properties.template
-rw-rw-r--. 1 hadoop hadoop  2870 Oct 10  2013 hive-log4j.properties.template

[hadoop@funshion-hadoop194 conf]$ mv hive-env.sh.template hive-env.sh
[hadoop@funshion-hadoop194 conf]$ mv hive-default.xml.template hive-site.xml


-- 7.4.1 Edit $HIVE_HOME/bin/hive-config.sh and add the following environment variables:

[hadoop@funshion-hadoop194 conf]$ vi $HIVE_HOME/bin/hive-config.sh

export JAVA_HOME=/usr/java/latest
export HIVE_HOME=/usr/local/hive
export HADOOP_HOME=/usr/local/hadoop

-- 7.4.2 Fix line 2002 of $HIVE_HOME/conf/hive-site.xml (Hive reports an error otherwise; in vi, jump to it with /auth):

-- Original value (broken closing tag):
auth</auth>

-- Change it to:
auth</value>

-- 7.4.3 Change the following properties in $HIVE_HOME/conf/hive-site.xml:



-- 7.4.3.1
-- Original:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

-- Change to:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://funshion-hadoop194:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

-- 7.4.3.2
-- Original:
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

-- Change to:
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

-- 7.4.3.3
-- Original:
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>APP</value>
  <description>username to use against metastore database</description>
</property>

-- Change to:
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>

-- 7.4.3.4
-- Original:
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>mine</value>
  <description>password to use against metastore database</description>
</property>

-- Change to:
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>bee56915</value>
  <description>password to use against metastore database</description>
</property>

-- 7.4.3.5
-- Original:
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>

-- Change to:
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>hdfs://mycluster:8020/user/hive/warehouse</value>
  <description>location of default database for the warehouse</description>
</property>

-- 7.4.3.6
-- Original:
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive-${user.name}</value>
  <description>Scratch space for Hive jobs</description>
</property>

-- Change to:
<property>
  <name>hive.exec.scratchdir</name>
  <value>hdfs://mycluster:8020/tmp/hive-${user.name}</value>
  <description>Scratch space for Hive jobs</description>
</property>


-- Add:
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>funshion-hadoop194,funshion-hadoop195,funshion-hadoop196,funshion-hadoop197,funshion-hadoop198</value>
</property>

<property>
  <name>hive.aux.jars.path</name>
  <value>
    file:///usr/local/hive/lib/hive-ant-0.14.0-SNAPSHOT.jar,
    file:///usr/local/hive/lib/protobuf-java-2.5.0.jar,
    file:///usr/local/hbase/lib/hbase-server-0.98.3-hadoop2.jar,
    file:///usr/local/hbase/lib/hbase-client-0.98.3-hadoop2.jar,
    file:///usr/local/hbase/lib/hbase-common-0.98.3-hadoop2.jar,
    file:///usr/local/hbase/lib/hbase-common-0.98.3-hadoop2-tests.jar,
    file:///usr/local/hbase/lib/hbase-protocol-0.98.3-hadoop2.jar,
    file:///usr/local/hbase/lib/htrace-core-2.04.jar,
    file:///usr/local/hive/lib/zookeeper-3.4.6.jar,
    file:///usr/local/hive/lib/guava-11.0.2.jar
  </value>
</property>

-- The layout above is only for readability; in the actual file put all of the jars on a single line:
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///usr/local/hive/lib/hive-ant-0.14.0-SNAPSHOT.jar,file:///usr/local/hbase/lib/hbase-server-0.98.3-hadoop2.jar,file:///usr/local/hbase/lib/hbase-client-0.98.3-hadoop2.jar,file:///usr/local/hbase/lib/hbase-common-0.98.3-hadoop2.jar,file:///usr/local/hbase/lib/hbase-common-0.98.3-hadoop2-tests.jar,file:///usr/local/hbase/lib/hbase-protocol-0.98.3-hadoop2.jar,file:///usr/local/hbase/lib/htrace-core-2.04.jar,file:///usr/local/hive/lib/zookeeper-3.4.6.jar</value>
</property>
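-- If the jar versions change, one possible way to regenerate that single-line value from the paths used above (assumes GNU sed and paste are available):
ls /usr/local/hive/lib/hive-ant-*.jar \
   /usr/local/hbase/lib/hbase-{server,client,common,protocol}-*-hadoop2.jar \
   /usr/local/hbase/lib/hbase-common-*-hadoop2-tests.jar \
   /usr/local/hbase/lib/htrace-core-*.jar \
   /usr/local/hive/lib/zookeeper-*.jar \
 | sed 's|^|file://|' | paste -sd, -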



-- First, replace the HBase jars under hive/lib with the ones from your installed HBase; the following jars are needed (a copy sketch follows the list):
hbase-client-0.98.2-hadoop2.jar
hbase-common-0.98.2-hadoop2.jar
hbase-common-0.98.2-hadoop2-tests.jar
hbase-protocol-0.98.2-hadoop2.jar
htrace-core-2.04.jar
hbase-server-0.98.2-hadoop2.jar
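-- A rough sketch of that replacement (jar versions are whatever your HBase ships; 0.98.3-hadoop2 is the version used elsewhere in this guide, so adjust as needed and back up the old jars first):
cd $HIVE_HOME/lib
for j in hbase-server hbase-client hbase-common hbase-protocol; do
    rm -f ${j}-*.jar
    cp /usr/local/hbase/lib/${j}-0.98.3-hadoop2.jar .
done
cp /usr/local/hbase/lib/hbase-common-0.98.3-hadoop2-tests.jar /usr/local/hbase/lib/htrace-core-2.04.jar .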


Add the cluster's ZooKeeper nodes (all of them) to hive-site.xml, as already shown above:

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>funshion-hadoop194,funshion-hadoop195,funshion-hadoop196,funshion-hadoop197,funshion-hadoop198</value>
</property>



-- In addition, before creating any Hive databases or tables you must create /tmp and /user/hive/warehouse (the directory specified by hive.metastore.warehouse.dir) on HDFS and give them group write permission (chmod g+w). The commands are:
$ $HADOOP_HOME/bin/hadoop fs -mkdir /tmp
$ $HADOOP_HOME/bin/hadoop fs -mkdir /user/hive/warehouse
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$ $HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse

-- Step 7.5 Start and log in to Hive, and create Hive tables


14/06/16 18:58:50 WARN conf.HiveConf: DEPRECATED: hive.metastore.ds.retry.* no longer has any effect.  Use hive.hmshandler.retry.* instead


-- Start the Hive server:
# bin/hive --service hiveserver -hiveconf hbase.zookeeper.quorum=funshion-hadoop194,funshion-hadoop195,funshion-hadoop196,funshion-hadoop197,funshion-hadoop198 &
# bin/hive -hiveconf hbase.zookeeper.quorum=funshion-hadoop194,funshion-hadoop195,funshion-hadoop196,funshion-hadoop197,funshion-hadoop198 &
# bin/hive -hiveconf hive.root.logger=DEBUG,console hbase.master=funshion-hadoop194:60010


# bin/hive -hiveconf hbase.master=funshion-hadoop194:60010 --auxpath /usr/local/hive/lib/hive-ant-0.13.1.jar,/usr/local/hive/lib/protobuf-java-2.5.0.jar,/usr/local/hive/lib/hbase-client-0.98.3-hadoop2.jar, \
/usr/local/hive/lib/hbase-common-0.98.3-hadoop2.jar,/usr/local/hive/lib/zookeeper-3.4.6.jar,/usr/local/hive/lib/guava-11.0.2.jar


#bin/hive -hiveconf hbase.zookeeper.quorum=node1,node2,node3


-- Client login:
$HIVE_HOME/bin/hive -h127.0.0.1 -p10000
$HIVE_HOME/bin/hive -hfunshion-hadoop194 -p10000
$HIVE_HOME/bin/hive -p10000


[hadoop@funshion-hadoop194 lib]$ hive --service hiveserver & 


[hadoop@funshion-hadoop194 lib]$ hive
14/06/10 16:56:59 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/06/10 16:56:59 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/06/10 16:56:59 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/06/10 16:56:59 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/06/10 16:56:59 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/06/10 16:56:59 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/06/10 16:56:59 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative


Logging initialized using configuration in jar:file:/usr/local/hive-0.12.0-bin/lib/hive-common-0.12.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hive-0.12.0-bin/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
hive> show databases;
OK
Failed with exception java.io.IOException:java.io.IOException: Cannot create an instance of InputFormat class org.apache.hadoop.mapred.TextInputFormat as specified in mapredWork!


-- If you get an error like the one above, add the following environment variables to ~/.bash_profile:

export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native/Linux-amd64-64
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_HOME/lib/native/hadoop-lzo-0.4.20-SNAPSHOT.jar


-- Hive client login:
[hadoop@funshion-hadoop194 bin]$  $HIVE_HOME/bin/hive -h127.0.0.1 -p10000
14/06/10 17:13:17 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/06/10 17:13:17 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/06/10 17:13:17 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/06/10 17:13:17 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/06/10 17:13:17 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/06/10 17:13:17 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/06/10 17:13:17 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative


Logging initialized using configuration in jar:file:/usr/local/hive-0.12.0-bin/lib/hive-common-0.12.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hive-0.12.0-bin/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
[127.0.0.1:10000] hive> create database web;
OK
[127.0.0.1:10000] hive> CREATE EXTERNAL TABLE pv2(
                      >   protocol string, 
                      >   rprotocol string, 
                      >   time int, 
                      >   ip string, 
                      >   fck string, 
                      >   mac string, 
                      >   userid string, 
                      >   fpc string, 
                      >   version string, 
                      >   sid string, 
                      >   pvid string, 
                      >   config string, 
                      >   url string, 
                      >   referurl string, 
                      >   channelid string, 
                      >   vtime string, 
                      >   ext string, 
                      >   useragent string, 
                      >   step string, 
                      >   sestep string, 
                      >   seidcount string, 
                      >   ta string)
                      > PARTITIONED BY ( 
                      >   year string, 
                      >   month string, 