
HBase + Hadoop Installation and Deployment


Installing multiple RedHat Linux systems under VMware. These notes were collected from various online sources; following the steps in order, the installation generally goes smoothly.


1. Create the user

groupadd bigdata

useradd -g bigdata hadoop

passwd hadoop
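A quick, optional check that the account landed in the right group:

id hadoop    # groups should include bigdata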


2. Set up the JDK

vi /etc/profile


export JAVA_HOME=/usr/lib/java-1.7.0_07

export CLASSPATH=.

export HADOOP_HOME=/home/hadoop/hadoop

export HBASE_HOME=/home/hadoop/hbase

export HADOOP_MAPRED_HOME=${HADOOP_HOME}

export HADOOP_COMMON_HOME=${HADOOP_HOME}

export HADOOP_HDFS_HOME=${HADOOP_HOME}

export YARN_HOME=${HADOOP_HOME}

export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop

export HBASE_CONF_DIR=${HBASE_HOME}/conf

export ZK_HOME=/home/hadoop/zookeeper

export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$HADOOP_HOME/sbin:$ZK_HOME/bin:$PATH


source /etc/profile

chmod 777 -R /usr/lib/java-1.7.0_07
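With the profile sourced, it is worth confirming the environment resolves before going further:

java -version        # should report 1.7.0_07
echo $HADOOP_HOME    # should print /home/hadoop/hadoop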


3. Edit the hosts file

vi /etc/hosts

Add:

172.16.254.215   master
172.16.254.216   salve1
172.16.254.217   salve2
172.16.254.218   salve3
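A quick loop from any node confirms the new names resolve:

for h in master salve1 salve2 salve3; do ping -c 1 $h; done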


4. Passwordless SSH

On the 215 server (master):

su - root

vi /etc/ssh/sshd_config

Make sure it contains the following lines:

RSAAuthentication yes

PubkeyAuthentication yes

AuthorizedKeysFile      .ssh/authorized_keys

Restart sshd:

service sshd restart


su - hadoop

ssh-keygen -t rsa

cd /home/hadoop/.ssh

cat id_rsa.pub >> authorized_keys

chmod 600 authorized_keys


On 216, 217, and 218 (each slave), run:

mkdir /home/hadoop/.ssh

chmod 700 /home/hadoop/.ssh


On 215, run:

scp id_rsa.pub hadoop@salve1:/home/hadoop/.ssh/

scp id_rsa.pub hadoop@salve2:/home/hadoop/.ssh/

scp id_rsa.pub hadoop@salve3:/home/hadoop/.ssh/


Then on 216, 217, and 218, run:

cat /home/hadoop/.ssh/id_rsa.pub >> /home/hadoop/.ssh/authorized_keys

chmod 600 /home/hadoop/.ssh/authorized_keys
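Passwordless login should now work from the master; this loop should print each slave's hostname without asking for a password:

for h in salve1 salve2 salve3; do ssh hadoop@$h hostname; done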


5. Set up the Hadoop, HBase, and ZooKeeper directories

su - hadoop

mkdir /home/hadoop/hadoop

mkdir /home/hadoop/hbase

mkdir /home/hadoop/zookeeper


cp -r /home/hadoop/soft/hadoop-2.0.1-alpha/* /home/hadoop/hadoop/

cp -r /home/hadoop/soft/hbase-0.95.0-hadoop2/* /home/hadoop/hbase/

cp -r /home/hadoop/soft/zookeeper-3.4.5/* /home/hadoop/zookeeper/


1) Hadoop configuration


vi /home/hadoop/hadoop/etc/hadoop/hadoop-env.sh

Modify:

export JAVA_HOME=/usr/lib/java-1.7.0_07


vi /home/hadoop/hadoop/etc/hadoop/core-site.xml

Add inside <configuration>:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://172.16.254.215:9000</value>
</property>
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>172.16.254.215</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>*</value>
</property>

vi /home/hadoop/hadoop/etc/hadoop/slaves

Add (the master is not used as a slave):

salve1

salve2

salve3


vi /home/hadoop/hadoop/etc/hadoop/hdfs-site.xml

Add inside <configuration> (a single nameservice, ns1):

<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hadoop/hdfs/name</value>
  <final>true</final>
</property>
<property>
  <name>dfs.federation.nameservice.id</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.federation.nameservices</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.namenode.backup.address.ns1</name>
  <value>172.16.254.215:50100</value>
</property>
<property>
  <name>dfs.namenode.backup.http-address.ns1</name>
  <value>172.16.254.215:50105</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1</name>
  <value>172.16.254.215:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns1</name>
  <value>172.16.254.215:23001</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hadoop/hdfs/data</value>
  <final>true</final>
</property>
<property>
  <name>dfs.namenode.secondary.http-address.ns1</name>
  <value>172.16.254.215:23002</value>
</property>

vi /home/hadoop/hadoop/etc/hadoop/yarn-site.xml

Add inside <configuration>:

<property>
  <name>yarn.resourcemanager.address</name>
  <value>172.16.254.215:18040</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>172.16.254.215:18030</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>172.16.254.215:18088</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>172.16.254.215:18025</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>172.16.254.215:18141</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce.shuffle</value>
</property>

2) HBase configuration


vi /home/hadoop/hbase/conf/hbase-site.xml

Add inside <configuration>:

<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://172.16.254.215:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.config.read.zookeeper.config</name>
  <value>true</value>
</property>
<property>
  <name>hbase.master</name>
  <value>master</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>salve1,salve2,salve3</value>
</property>
<property>
  <name>zookeeper.session.timeout</name>
  <value>60000</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/home/hadoop/hbase/tmp</value>
  <description>Temporary directory on the local filesystem.</description>
</property>
<property>
  <name>hbase.client.keyvalue.maxsize</name>
  <value>10485760</value>
</property>

vi /home/hadoop/hbase/conf/regionservers

Add:

salve1

salve2

salve3


vi /home/hadoop/hbase/conf/hbase-env.sh

Modify (ZooKeeper runs externally, so HBase must not manage it):

export JAVA_HOME=/usr/lib/java-1.7.0_07

export HBASE_MANAGES_ZK=false


3) ZooKeeper configuration


vi /home/hadoop/zookeeper/conf/zoo.cfg

Add:

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/home/hadoop/zookeeper/data

clientPort=2181

server.1=salve1:2888:3888

server.2=salve2:2888:3888

server.3=salve3:2888:3888


Copy /home/hadoop/zookeeper/conf/zoo.cfg to /home/hadoop/hbase/


4) Sync the master to the slaves

scp -r /home/hadoop/hadoop hadoop@salve1:/home/hadoop
scp -r /home/hadoop/hbase hadoop@salve1:/home/hadoop
scp -r /home/hadoop/zookeeper hadoop@salve1:/home/hadoop

scp -r /home/hadoop/hadoop hadoop@salve2:/home/hadoop
scp -r /home/hadoop/hbase hadoop@salve2:/home/hadoop
scp -r /home/hadoop/zookeeper hadoop@salve2:/home/hadoop

scp -r /home/hadoop/hadoop hadoop@salve3:/home/hadoop
scp -r /home/hadoop/hbase hadoop@salve3:/home/hadoop
scp -r /home/hadoop/zookeeper hadoop@salve3:/home/hadoop
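Equivalently, a short loop covers all nine copies (same commands as above, just compressed):

for h in salve1 salve2 salve3; do
  for d in hadoop hbase zookeeper; do
    scp -r /home/hadoop/$d hadoop@$h:/home/hadoop
  done
done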


Set each slave's ZooKeeper myid; the value must match the server.N lines in zoo.cfg, and the dataDir has to exist first. Run the matching pair on each host:

mkdir -p /home/hadoop/zookeeper/data

echo "1" > /home/hadoop/zookeeper/data/myid    # on salve1
echo "2" > /home/hadoop/zookeeper/data/myid    # on salve2
echo "3" > /home/hadoop/zookeeper/data/myid    # on salve3


5) Testing

Test Hadoop:

hadoop namenode -format -clusterid clustername


start-all.sh
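After start-all.sh, jps gives a quick health check; assuming the layout above, roughly these daemons should be running:

jps
# on master: NameNode, SecondaryNameNode, ResourceManager
# on each slave: DataNode, NodeManager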

hadoop fs -ls hdfs://172.16.254.215:9000/
hadoop fs -mkdir hdfs://172.16.254.215:9000/hbase

# hadoop fs -copyFromLocal ./install.log hdfs://172.16.254.215:9000/testfolder
# hadoop fs -ls hdfs://172.16.254.215:9000/testfolder
# hadoop fs -put /usr/hadoop/hadoop-2.0.1-alpha/*.txt hdfs://172.16.254.215:9000/testfolder
# cd /usr/hadoop/hadoop-2.0.1-alpha/share/hadoop/mapreduce
# hadoop jar hadoop-mapreduce-examples-2.0.1-alpha.jar wordcount hdfs://172.16.254.215:9000/testfolder hdfs://172.16.254.215:9000/output
# hadoop fs -ls hdfs://172.16.254.215:9000/output
# hadoop fs -cat hdfs://172.16.254.215:9000/output/part-r-00000


Start ZooKeeper on salve1, salve2, and salve3:

zkServer.sh start
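Once all three are up, each node should report its role, one leader and two followers:

zkServer.sh status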


Start HBase:

start-hbase.sh

Enter the HBase shell:

hbase shell

Test HBase:

list

create 'student','name','address'

put 'student','1','name','tom'

get 'student','1'
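A scan confirms the write landed; the commented commands clean up the test table afterwards if desired:

scan 'student'
# disable 'student'
# drop 'student'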



