Detailed explanation of common Hadoop errors and how to deal with them

WBOY
Release: 2016-07-21 15:05:36

1. hadoop-root-datanode-master.log contains the following error, and the datanode fails to start:
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in
Cause: every namenode format creates a new namespaceID. The directory configured by the dfs.data.dir parameter still holds the ID from the previous format, which no longer matches the ID under the directory configured by dfs.name.dir: formatting the namenode clears the namenode's data but not the datanode's, so the datanode refuses to start. The fix is to clear the directory configured by dfs.data.dir before each format.
Command to format HDFS:

hadoop namenode -format

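Clearing dfs.data.dir discards the blocks stored on that node. An alternative often used on these old Hadoop versions is to copy the namenode's namespaceID into the datanode's VERSION file instead. A minimal runnable sketch, with fabricated paths and VERSION files standing in for the real ones under dfs.name.dir and dfs.data.dir:

```shell
# Sketch: make a datanode's namespaceID match the namenode's.
# Paths and IDs are illustrative; on a real cluster use the directories
# from your dfs.name.dir and dfs.data.dir settings.
NAME_DIR=/tmp/hadoop-demo/dfs/name
DATA_DIR=/tmp/hadoop-demo/dfs/data

# For this demo, fabricate the two VERSION files a real cluster would have.
mkdir -p "$NAME_DIR/current" "$DATA_DIR/current"
echo "namespaceID=1146266513" > "$NAME_DIR/current/VERSION"
echo "namespaceID=355502454"  > "$DATA_DIR/current/VERSION"

# Read the authoritative ID from the namenode side...
NS_ID=$(grep '^namespaceID=' "$NAME_DIR/current/VERSION" | cut -d= -f2)
# ...and rewrite the datanode's VERSION file to match (GNU sed).
sed -i "s/^namespaceID=.*/namespaceID=$NS_ID/" "$DATA_DIR/current/VERSION"

cat "$DATA_DIR/current/VERSION"
```

Stop the datanode before touching its VERSION file, and restart it afterwards.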
2. The datanode cannot connect to the namenode, so the datanode cannot start:
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to ... failed on local exception: java.net.NoRouteToHostException: No route to host
Solution: turn off the firewall:

service iptables stop

Note that the firewall will be enabled again after the machine restarts. On SysV-init systems (CentOS/RHEL 6 era) you can disable it permanently with chkconfig iptables off.

3. When uploading files from the local machine to the HDFS file system, the following errors occur:
INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink
INFO hdfs.DFSClient: Abandoning block blk_-1300529705803292651_37023
WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
Solution:
Turn off the firewall:


service iptables stop

Disable SELinux:
Edit the /etc/selinux/config file and set SELINUX=disabled (this takes effect after a reboot).
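A sketch of that config edit, done here on a throwaway copy of the file so it can be run anywhere; on a real host the file is /etc/selinux/config, and setenforce 0 gives an immediate but non-persistent change:

```shell
# Work on a throwaway copy; on a real host this would be /etc/selinux/config.
CFG=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"

# Flip enforcing -> disabled (takes effect after the next reboot; GNU sed).
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$CFG"
grep '^SELINUX=' "$CFG"   # -> SELINUX=disabled

# For an immediate, non-persistent change you would run: setenforce 0
```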

4. Errors caused by safe mode
org.apache.hadoop.dfs.SafeModeException: Cannot delete ..., Name node is in safe mode
When the distributed file system starts, it begins in safe mode. While the file system is in safe mode, its contents cannot be modified or deleted until safe mode ends. Safe mode is mainly used to check the validity of the data blocks on each DataNode at startup and, according to policy, to copy or delete data blocks as necessary. Safe mode can also be entered by command at runtime. In practice, if you modify or delete files right after the system starts, you will see an error saying that safe mode does not allow modification; usually you only need to wait a while.

To leave safe mode manually:

hadoop dfsadmin -safemode leave

(hadoop dfsadmin -safemode get reports the current state, and -safemode wait blocks until safe mode ends.)
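The "wait a while" advice can be scripted as a poll loop. A sketch, where status() is a stub standing in for `hadoop dfsadmin -safemode get` (which prints "Safe mode is ON" / "Safe mode is OFF" on these versions); replace the stub with the real call on a cluster:

```shell
# Poll until the namenode reports that safe mode is off.
# status() is a stub for `hadoop dfsadmin -safemode get`; here it pretends
# safe mode ends after three polls so the sketch runs anywhere.
status() {
  if [ "$1" -lt 3 ]; then echo "Safe mode is ON"; else echo "Safe mode is OFF"; fi
}

polls=0
until status "$polls" | grep -q OFF; do
  polls=$((polls + 1))
  # on a real cluster, sleep a few seconds between polls
done
echo "left safe mode after $polls polls"
```

The same effect is available built in as hadoop dfsadmin -safemode wait, which simply blocks until safe mode ends.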

source: php.cn