Alex's Hadoop Tutorial for Beginners: Lesson 8 - Installing Sqoop1, Importing, and Exporting
Damn! The Sqoop2 documentation is way too thin, and it doesn't even support HBase. It's really bare-bones, so I gave up on Sqoop2 in frustration and switched to Sqoop1. To those of you who followed my earlier tutorial: please don't throw bricks at me, I was an uninformed bystander too.
Uninstalling Sqoop2
This step is optional. If you installed Sqoop2 by following my earlier tutorial, you have to uninstall it first; if you never installed it, skip this step.
$ sudo su -
$ service sqoop2-server stop
$ yum -y remove sqoop2-server
$ yum -y remove sqoop2-client
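If you want to double-check that nothing was left behind, here is an optional sanity check (this assumes an RPM-based system like the CentOS box used in this series):

# rpm -qa | grep -i sqoop

If nothing comes back, Sqoop2 is fully gone.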
Installing Sqoop1
yum install -y sqoop
Run help to test whether it installed correctly:
# sqoop help
Warning: /usr/lib/sqoop/../hive-hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/11/28 11:33:11 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.1
usage: sqoop COMMAND [ARGS]

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.
Copying the driver to /usr/lib/sqoop/lib
MySQL JDBC driver download page
After downloading, unpack it, find the driver jar inside, upload it to the server, and then move it into place:
mv /home/alex/mysql-connector-java-5.1.34-bin.jar /usr/lib/sqoop/lib
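To be safe, verify that the jar really landed in Sqoop's lib directory (an optional quick check):

# ls -l /usr/lib/sqoop/lib | grep mysql-connector

If the file is listed, Sqoop will pick the driver up automatically.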
Import
Data preparation
Create a table in MySQL:

CREATE TABLE `employee` (
  `id` int(11) NOT NULL,
  `name` varchar(20) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
Insert a few rows of data:
insert into employee (id,name) values (1,'michael');
insert into employee (id,name) values (2,'ted');
insert into employee (id,name) values (3,'jack');
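Before importing, you can confirm the rows are really there straight from the shell (same root/root credentials as used below; adjust to your setup):

# mysql -uroot -proot sqoop_test -e "select * from employee;"

You should see all 3 rows.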
Importing from MySQL into HDFS
Listing databases and tables
Let's not rush into the import. First, a couple of warm-up steps, which also make it easier to troubleshoot problems. List all databases:
# sqoop list-databases --connect jdbc:mysql://localhost:3306/sqoop_test --username root --password root
Warning: /usr/lib/sqoop/../hive-hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/12/01 09:20:28 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.1
14/12/01 09:20:28 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/12/01 09:20:28 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
information_schema
cacti
metastore
mysql
sqoop_test
wordpress
zabbix
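Note the warning in the output: putting the password on the command line is insecure. On anything beyond a test box, use -P instead, which prompts for the password interactively (the rest of the command is unchanged):

# sqoop list-databases --connect jdbc:mysql://localhost:3306/sqoop_test --username root -P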
Next, use sqoop to connect to the database and list all of its tables:
# sqoop list-tables --connect jdbc:mysql://localhost/sqoop_test --username root --password root
Warning: /usr/lib/sqoop/../hive-hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/11/28 11:46:11 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.1
14/11/28 11:46:11 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/11/28 11:46:11 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
employee
student
workers
This command works without a driver class name because sqoop supports MySQL out of the box. If you need to pass the JDBC driver class explicitly, append it like this:
# sqoop list-tables --connect jdbc:mysql://localhost/sqoop_test --username root --password root --driver com.mysql.jdbc.Driver
Importing data into HDFS
sqoop import --connect jdbc:mysql://localhost:3306/sqoop_test --username root --password root --table employee --m 1 --target-dir /user/test
# sqoop import --connect jdbc:mysql://localhost:3306/sqoop_test --username root --password root --table employee --m 1 --target-dir /user/test
Warning: /usr/lib/sqoop/../hive-hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/12/01 14:15:41 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.1
14/12/01 14:15:41 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/12/01 14:15:41 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/12/01 14:15:41 INFO tool.CodeGenTool: Beginning code generation
14/12/01 14:15:42 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
14/12/01 14:15:42 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
14/12/01 14:15:42 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/7b8091924ce8deb4f2ccae14c404a5bf/employee.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/12/01 14:15:45 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/7b8091924ce8deb4f2ccae14c404a5bf/employee.jar
14/12/01 14:15:45 WARN manager.MySQLManager: It looks like you are importing from mysql.
14/12/01 14:15:45 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
14/12/01 14:15:45 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
14/12/01 14:15:45 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
14/12/01 14:15:45 INFO mapreduce.ImportJobBase: Beginning import of employee
14/12/01 14:15:46 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/12/01 14:15:47 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/12/01 14:15:47 INFO client.RMProxy: Connecting to ResourceManager at xmseapp01/10.172.78.111:8032
14/12/01 14:15:50 INFO db.DBInputFormat: Using read commited transaction isolation
14/12/01 14:15:51 INFO mapreduce.JobSubmitter: number of splits:1
14/12/01 14:15:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1406097234796_0019
14/12/01 14:15:52 INFO impl.YarnClientImpl: Submitted application application_1406097234796_0019
14/12/01 14:15:52 INFO mapreduce.Job: The url to track the job: http://xmseapp01:8088/proxy/application_1406097234796_0019/
14/12/01 14:15:52 INFO mapreduce.Job: Running job: job_1406097234796_0019
14/12/01 14:16:08 INFO mapreduce.Job: Job job_1406097234796_0019 running in uber mode : false
14/12/01 14:16:08 INFO mapreduce.Job:  map 0% reduce 0%
14/12/01 14:16:19 INFO mapreduce.Job:  map 100% reduce 0%
14/12/01 14:16:20 INFO mapreduce.Job: Job job_1406097234796_0019 completed successfully
14/12/01 14:16:21 INFO mapreduce.Job: Counters: 30
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=99855
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=87
        HDFS: Number of bytes written=16
        HDFS: Number of read operations=4
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=1
        Other local map tasks=1
        Total time spent by all maps in occupied slots (ms)=8714
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=8714
        Total vcore-seconds taken by all map tasks=8714
        Total megabyte-seconds taken by all map tasks=8923136
    Map-Reduce Framework
        Map input records=2
        Map output records=2
        Input split bytes=87
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=58
        CPU time spent (ms)=1560
        Physical memory (bytes) snapshot=183005184
        Virtual memory (bytes) snapshot=704577536
        Total committed heap usage (bytes)=148897792
    File Input Format Counters
        Bytes Read=0
    File Output Format Counters
        Bytes Written=16
14/12/01 14:16:21 INFO mapreduce.ImportJobBase: Transferred 16 bytes in 33.6243 seconds (0.4758 bytes/sec)
14/12/01 14:16:21 INFO mapreduce.ImportJobBase: Retrieved 2 records.
Check the result:
# hdfs dfs -ls /user/test
Found 2 items
-rw-r--r--   2 root supergroup          0 2014-12-01 14:16 /user/test/_SUCCESS
-rw-r--r--   2 root supergroup         16 2014-12-01 14:16 /user/test/part-m-00000
# hdfs dfs -cat /user/test/part-m-00000
1,michael
2,ted
I have no idea why MySQL has 3 rows but only 2 got imported. Can anyone who understands this explain it? (One guess: with localhost in the --connect URL, the map task connects to MySQL on whichever node it happens to run on, which may not be the instance you inserted into; pointing --connect at the MySQL server's real hostname would rule that out.)
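A side note on --m 1: it forces a single map task, which is why the log says number of splits:1. For larger tables you would raise the mapper count and tell Sqoop which column to split on. A minimal sketch (the target dir /user/test_parallel is just an example name, and the split column should be reasonably evenly distributed):

sqoop import --connect jdbc:mysql://localhost:3306/sqoop_test --username root --password root --table employee --split-by id --m 4 --target-dir /user/test_parallel

Each mapper then imports one id range and writes its own part-m-0000x file.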
Problems I ran into
If you run into the following error:

14/12/01 10:12:42 INFO mapreduce.Job: Task Id : attempt_1406097234796_0017_m_000000_0, Status : FAILED
Error: employee : Unsupported major.minor version 51.0
Cause: sqoop is compiled with JDK 1.7, so if ps aux | grep hadoop shows hadoop running on 1.6, sqoop cannot work properly. Note: CDH 4.7 and later is already compatible with JDK 1.7, but if you upgraded from 4.5 you will find hadoop still running on JDK 1.6. You need to switch the JDK used by the whole hadoop stack to 1.7, which is also the officially recommended combination.
How to change the JDK
The official docs offer two methods. The first, http://www.cloudera.com/content/cloudera/en/documentation/cdh4/latest/CDH4-Requirements-and-Supported-Versions/cdhrsv_topic_3.html, tells you to create a symlink named default under /usr/java/ pointing to the JDK you want. I did that; it had no effect. The second, http://www.cloudera.com/content/cloudera/en/documentation/archives/cloudera-manager-4/v4-5-3/Cloudera-Manager-Enterprise-Edition-Installation-Guide/cmeeig_topic_16_2.html, tells you to add an environment variable. I did that too; also no effect. In the end I used the crude but effective approach: stop all related services, delete that damned JDK 1.6, and restart everything. After that it finally used /usr/java/default.
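For reference, the two official methods boil down to roughly the following (a sketch; the path /usr/java/jdk1.7.0_67 is just an example, and I'm assuming /etc/default/bigtop-utils is where your CDH init scripts pick up JAVA_HOME, which may differ on your setup):

# ln -s /usr/java/jdk1.7.0_67 /usr/java/default
# echo 'export JAVA_HOME=/usr/java/default' >> /etc/default/bigtop-utils

They didn't work for me, but they are far less invasive than deleting a JDK, so try them first.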
Commands to stop all hadoop-related services:
for x in `cd /etc/init.d ; ls hive-*` ; do sudo service $x stop ; done
for x in `cd /etc/init.d ; ls hbase-*` ; do sudo service $x stop ; done
/etc/init.d/zookeeper-server stop
for x in `cd /etc/init.d ; ls hadoop-*` ; do sudo service $x stop ; done
If you didn't install zookeeper, hbase, or hive, just skip them. I suggest using ps aux | grep jre1.6 to find out which services are still on the old JDK, then shutting them down one by one: everything else first, hadoop last.
Start everything back up:
for x in `cd /etc/init.d ; ls hadoop-*` ; do sudo service $x start ; done
/etc/init.d/zookeeper-server start
for x in `cd /etc/init.d ; ls hbase-*` ; do sudo service $x start ; done
for x in `cd /etc/init.d ; ls hive-*` ; do sudo service $x start ; done
Exporting data from HDFS to MySQL
Data preparation, continuing from the example above.
Empty the employee table:

truncate employee;
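You can quickly confirm the table is now empty (optional; same root/root credentials as above):

# mysql -uroot -proot sqoop_test -e "select count(*) from employee;"

It should report 0.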
Export the data to MySQL:
# sqoop export --connect jdbc:mysql://localhost:3306/sqoop_test --username root --password root --table employee --m 1 --export-dir /user/test
Warning: /usr/lib/sqoop/../hive-hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/12/01 15:16:50 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.1
14/12/01 15:16:50 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/12/01 15:16:51 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
14/12/01 15:16:51 INFO tool.CodeGenTool: Beginning code generation
14/12/01 15:16:51 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
14/12/01 15:16:52 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
14/12/01 15:16:52 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/f4a75fdefe1eb604181d47d6bc827e48/employee.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
14/12/01 15:16:55 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/f4a75fdefe1eb604181d47d6bc827e48/employee.jar
14/12/01 15:16:55 INFO mapreduce.ExportJobBase: Beginning export of employee
14/12/01 15:16:55 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/12/01 15:16:57 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/12/01 15:16:57 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
14/12/01 15:16:57 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/12/01 15:16:57 INFO client.RMProxy: Connecting to ResourceManager at xmseapp01/10.172.78.111:8032
14/12/01 15:17:00 INFO input.FileInputFormat: Total input paths to process : 1
14/12/01 15:17:00 INFO input.FileInputFormat: Total input paths to process : 1
14/12/01 15:17:00 INFO mapreduce.JobSubmitter: number of splits:1
14/12/01 15:17:00 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1406097234796_0021
14/12/01 15:17:01 INFO impl.YarnClientImpl: Submitted application application_1406097234796_0021
14/12/01 15:17:01 INFO mapreduce.Job: The url to track the job: http://xmseapp01:8088/proxy/application_1406097234796_0021/
14/12/01 15:17:01 INFO mapreduce.Job: Running job: job_1406097234796_0021
14/12/01 15:17:13 INFO mapreduce.Job: Job job_1406097234796_0021 running in uber mode : false
14/12/01 15:17:13 INFO mapreduce.Job:  map 0% reduce 0%
14/12/01 15:17:21 INFO mapreduce.Job: Task Id : attempt_1406097234796_0021_m_000000_0, Status : FAILED
Error: java.io.IOException: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown database 'sqoop_test'
    at org.apache.sqoop.mapreduce.ExportOutputFormat.getRecordWriter(ExportOutputFormat.java:79)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:624)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:744)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown database 'sqoop_test'
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:377)
    at com.mysql.jdbc.Util.getInstance(Util.java:360)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:978)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3887)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3823)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:870)
    at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1659)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1206)
    at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2234)
    at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2265)
    at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2064)
    at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:790)
    at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:44)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:377)
    at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:395)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:325)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:302)
    at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.<init>(AsyncSqlRecordWriter.java:76)
    at org.apache.sqoop.mapreduce.ExportOutputFormat$ExportRecordWriter.<init>(ExportOutputFormat.java:95)
    at org.apache.sqoop.mapreduce.ExportOutputFormat.getRecordWriter(ExportOutputFormat.java:77)
    ... 8 more
14/12/01 15:17:29 INFO mapreduce.Job: Task Id : attempt_1406097234796_0021_m_000000_1, Status : FAILED
Error: java.io.IOException: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown database 'sqoop_test'
    at org.apache.sqoop.mapreduce.ExportOutputFormat.getRecordWriter(ExportOutputFormat.java:79)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:624)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:744)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown database 'sqoop_test'
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:377)
    at com.mysql.jdbc.Util.getInstance(Util.java:360)
    at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:978)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3887)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3823)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:870)
    at com.mysql.jdbc.MysqlIO.proceedHandshakeWithPluggableAuthentication(MysqlIO.java:1659)
    at com.mysql.jdbc.MysqlIO.doHandshake(MysqlIO.java:1206)
    at com.mysql.jdbc.ConnectionImpl.coreConnect(ConnectionImpl.java:2234)
    at com.mysql.jdbc.ConnectionImpl.connectOneTryOnly(ConnectionImpl.java:2265)
    at com.mysql.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:2064)
    at com.mysql.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:790)
    at com.mysql.jdbc.JDBC4Connection.<init>(JDBC4Connection.java:44)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at com.mysql.jdbc.Util.handleNewInstance(Util.java:377)
    at com.mysql.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:395)
    at com.mysql.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:325)
    at java.sql.DriverManager.getConnection(DriverManager.java:571)
    at java.sql.DriverManager.getConnection(DriverManager.java:215)
    at org.apache.sqoop.mapreduce.db.DBConfiguration.getConnection(DBConfiguration.java:302)
    at org.apache.sqoop.mapreduce.AsyncSqlRecordWriter.<init>(AsyncSqlRecordWriter.java:76)
    at org.apache.sqoop.mapreduce.ExportOutputFormat$ExportRecordWriter.<init>(ExportOutputFormat.java:95)
    at org.apache.sqoop.mapreduce.ExportOutputFormat.getRecordWriter(ExportOutputFormat.java:77)
    ... 8 more
14/12/01 15:17:40 INFO mapreduce.Job:  map 100% reduce 0%
14/12/01 15:17:41 INFO mapreduce.Job: Job job_1406097234796_0021 completed successfully
14/12/01 15:17:41 INFO mapreduce.Job: Counters: 32
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=99542
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=139
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=4
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=0
    Job Counters
        Failed map tasks=2
        Launched map tasks=3
        Other local map tasks=2
        Rack-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=21200
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=21200
        Total vcore-seconds taken by all map tasks=21200
        Total megabyte-seconds taken by all map tasks=21708800
    Map-Reduce Framework
        Map input records=2
        Map output records=2
        Input split bytes=120
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=86
        CPU time spent (ms)=1330
        Physical memory (bytes) snapshot=177094656
        Virtual memory (bytes) snapshot=686768128
        Total committed heap usage (bytes)=148897792
    File Input Format Counters
        Bytes Read=0
    File Output Format Counters
        Bytes Written=0
14/12/01 15:17:41 INFO mapreduce.ExportJobBase: Transferred 139 bytes in 43.6687 seconds (3.1831 bytes/sec)
14/12/01 15:17:41 INFO mapreduce.ExportJobBase: Exported 2 records.
I have no idea why that pile of exceptions shows up either?! Anyway, checking MySQL afterwards, the 2 rows were exported successfully. (Most likely the failed attempts ran on worker nodes where localhost has no sqoop_test database, and the retry that landed on the MySQL host's node succeeded. That also supports the localhost theory for the missing row during import: use the MySQL server's real hostname in --connect.)
mysql> select * from employee;
+----+---------+
| id | name    |
+----+---------+
|  1 | michael |
|  2 | ted     |
+----+---------+
2 rows in set (0.00 sec)
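One more tip: running the same export twice fails with duplicate-key errors, because export does plain INSERTs by default. If you need to re-export over existing rows, Sqoop's update mode helps (a sketch; --update-mode allowinsert turns the export into an upsert keyed on id):

# sqoop export --connect jdbc:mysql://localhost:3306/sqoop_test --username root --password root --table employee --m 1 --export-dir /user/test --update-key id --update-mode allowinsert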
OK, class dismissed!