
Alex's Hadoop Beginner Tutorial, Lesson 8: Importing into HBase and Hive with Sqoop1



Continuing the series. Honestly, importing and exporting between MySQL and HDFS isn't much use in real projects, but it is a good way to get started. Today we cover how Sqoop works with HBase and Hive. I've just realized the ordering of this tutorial series is rather messy: I never covered installing Hive first. My apologies for that; I'll make it up in a later lesson.

Data preparation

MySQL

Create an employee table in MySQL and insert some test data:
CREATE TABLE `employee` (    
  `id` int(11) NOT NULL,    
  `name` varchar(20) NOT NULL,    
  PRIMARY KEY (`id`)    
) ENGINE=MyISAM  DEFAULT CHARSET=utf8;  

insert into employee (id,name) values (1,'michael');  
insert into employee (id,name) values (2,'ted'); 
insert into employee (id,name) values (3,'jack'); 
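As an optional sanity check, you can confirm the rows are there from the mysql client; this should return exactly the three rows just inserted:

mysql> select * from employee;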

HBase

hbase(main):006:0> create 'employee','info'
0 row(s) in 0.4440 seconds

=> Hbase::Table - employee
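If you want to double-check the new table and its info column family, the HBase shell's describe command will show them (a hypothetical continuation of the session above; the output format varies by HBase version, so it is omitted here):

hbase(main):007:0> describe 'employee'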

Hive

No data preparation is needed here; later on we'll pass --create-hive-table and Sqoop will create the table for us.
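Note that --create-hive-table makes the Sqoop job fail if the target Hive table already exists, so if in doubt you can first list the tables from the Hive shell (hypothetical session) and make sure hive_employee is not among them:

hive> show tables;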

Importing from MySQL into HBase

# sqoop import --connect jdbc:mysql://localhost:3306/sqoop_test --username root --password root --table employee --hbase-table employee --column-family info --hbase-row-key id -m 1
Warning: /usr/lib/sqoop/../hive-hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/12/01 17:36:25 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.1
14/12/01 17:36:25 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/12/01 17:36:25 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/12/01 17:36:25 INFO tool.CodeGenTool: Beginning code generation
14/12/01 17:36:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
14/12/01 17:36:26 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
14/12/01 17:36:26 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
…… (the middle of the log is long, so it is omitted here)
14/12/01 17:37:12 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 37.3924 seconds (0 bytes/sec)
14/12/01 17:37:12 INFO mapreduce.ImportJobBase: Retrieved 3 records.


Check HBase:
hbase(main):001:0> scan 'employee'
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
ROW                                      COLUMN+CELL                                                                                                           
 1                                       column=info:name, timestamp=1417426628685, value=michael                                                              
 2                                       column=info:name, timestamp=1417426628685, value=ted                                                                  
 3                                       column=info:name, timestamp=1417426628685, value=jack                                                                 
3 row(s) in 0.1630 seconds

All 3 rows were imported successfully.
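As an aside, Sqoop can also create the HBase table and column family for you, so the manual create step above is optional. A sketch of the same import with the --hbase-create-table flag added (not run here; same connection settings as above):

# sqoop import --connect jdbc:mysql://localhost:3306/sqoop_test --username root --password root \
    --table employee --hbase-table employee --hbase-create-table \
    --column-family info --hbase-row-key id -m 1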

Importing from MySQL into Hive

# sqoop import --connect jdbc:mysql://localhost:3306/sqoop_test --username root --password root --table employee --hive-import --hive-table hive_employee --create-hive-table
Warning: /usr/lib/sqoop/../hive-hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
……………………
14/12/02 15:12:13 INFO hive.HiveImport: Loading data to table default.hive_employee
14/12/02 15:12:14 INFO hive.HiveImport: Table default.hive_employee stats: [num_partitions: 0, num_files: 4, num_rows: 0, total_size: 23, raw_data_size: 0]
14/12/02 15:12:14 INFO hive.HiveImport: OK
14/12/02 15:12:14 INFO hive.HiveImport: Time taken: 0.799 seconds
14/12/02 15:12:14 INFO hive.HiveImport: Hive import complete.
14/12/02 15:12:14 INFO hive.HiveImport: Export directory is empty, removing it.

A note about real environments: don't use localhost in the MySQL JDBC URL. The import job is distributed out to the various Hadoop nodes, and each of those machines has to be able to actually reach MySQL over JDBC with that URL; otherwise you will lose data.
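For example, a cluster-friendly version of the Hive import might look like the following sketch, where mysql-host.example.com is a placeholder for a hostname every node can resolve, and -P prompts for the password instead of putting it on the command line, as the Sqoop warning in the log above suggests:

# sqoop import --connect jdbc:mysql://mysql-host.example.com:3306/sqoop_test \
    --username root -P \
    --table employee --hive-import --hive-table hive_employee --create-hive-table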
Check Hive:
hive> select * from hive_employee;
OK
1	michael
2	ted
3	jack
Time taken: 0.179 seconds, Fetched: 3 row(s)
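If you're curious what --create-hive-table actually generated from the MySQL schema, you can inspect the table definition from the Hive shell (hypothetical session; the exact output depends on your Hive version):

hive> describe hive_employee;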

One more caveat: at the moment Sqoop can only import MySQL data into native Hive tables (that is, tables stored on HDFS); it cannot import into external tables, such as Hive tables built on top of HBase.
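For reference, the kind of table that caveat rules out is an HBase-backed external table like the sketch below (standard Hive/HBase integration DDL; the table and column mapping are just illustrative). Sqoop's --hive-import cannot target a table like this:

hive> CREATE EXTERNAL TABLE hbase_employee (id int, name string)
    > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
    > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,info:name")
    > TBLPROPERTIES ("hbase.table.name" = "employee");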
Class dismissed! Next time: exporting!