1. The following error in hadoop-root-datanode-master.log causes the DataNode to fail to start:
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in
Cause: Each NameNode format creates a new namespaceID, but the directory configured with the dfs.data.dir parameter still contains the ID created by the previous format, which no longer matches the ID in the directory configured with the dfs.name.dir parameter. Formatting clears the data under the NameNode but not the data under the DataNode, so the DataNode fails at startup. The fix is to clear the directory configured with dfs.data.dir before each format.
Command to format HDFS:
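A minimal sketch of the clear-then-format sequence described above, assuming a Hadoop 1.x-style installation; the path /data/hdfs/data is a placeholder for whatever dfs.data.dir actually points to in your hdfs-site.xml:

```shell
# Stop HDFS before touching any storage directories.
stop-dfs.sh

# Clear the DataNode storage directory so its old namespaceID is discarded.
# Replace /data/hdfs/data with the actual value of dfs.data.dir.
rm -rf /data/hdfs/data/*

# Re-format the NameNode; this generates a fresh namespaceID.
hadoop namenode -format

# Restart HDFS; the DataNode will now adopt the new namespaceID.
start-dfs.sh
```

Run the rm step on every DataNode host, not just the master; any node that keeps its old directory will hit the same Incompatible namespaceIDs error again.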
3. When uploading a local file to the HDFS file system, the following error occurs:
INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink
INFO hdfs.DFSClient: Abandoning block blk_-1300529705803292651_37023
WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
Solution:
Turn off the firewall:
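A "Bad connect ack with firstBadLink" typically means the client cannot reach a DataNode's data-transfer port. On the RHEL/CentOS systems common in Hadoop 1.x deployments, the firewall can be disabled as follows; this is a sketch to run on every node, and the exact service name depends on your distribution:

```shell
# Stop the iptables firewall immediately (RHEL/CentOS 5/6 style).
service iptables stop

# Prevent it from starting again on the next boot.
chkconfig iptables off

# On newer systemd-based systems the equivalent would be:
# systemctl stop firewalld && systemctl disable firewalld
```

In production, opening the specific Hadoop ports (e.g. the DataNode data-transfer port) is preferable to disabling the firewall entirely.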
4. Errors caused by safe mode
org.apache.hadoop.dfs.SafeModeException: Cannot delete ..., Name node is in safe mode
When the distributed file system starts, it begins in safe mode. While in safe mode, the contents of the file system cannot be modified or deleted until safe mode ends. Safe mode is mainly used to check the validity of the data blocks on each DataNode at startup and to copy or delete data blocks as necessary according to policy. Safe mode can also be entered by command at runtime. In practice, if you see an error saying that safe mode does not allow modification or deletion shortly after the system starts, you usually only need to wait a while.
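Safe mode can also be inspected and controlled explicitly with the dfsadmin command (hadoop dfsadmin in Hadoop 1.x; hdfs dfsadmin in later releases):

```shell
# Check whether the NameNode is currently in safe mode.
hadoop dfsadmin -safemode get

# Force the NameNode to leave safe mode immediately,
# instead of waiting for block reports to complete.
hadoop dfsadmin -safemode leave

# Enter safe mode manually, e.g. for maintenance.
hadoop dfsadmin -safemode enter
```

Forcing leave is a quick fix when startup checks are simply slow, but if the NameNode stays in safe mode for a long time it is worth checking for missing or under-replicated blocks first.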