
Slave Node and Disk Failures in HDFS

By Dirk deRoos

Like death and taxes, disk failures (and, given enough time, even node or rack failures) are inevitable in the Hadoop Distributed File System (HDFS). In the example shown, even if one rack were to fail, the cluster would continue functioning. Performance would suffer because half the processing resources would be lost, but the system would still be online and all data would still be available.

[Figure: an example HDFS cluster with file blocks replicated across slave nodes and racks]
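If you want to see for yourself how HDFS spreads a file's block replicas across slave nodes (and across racks, when rack awareness is configured), you can ask the NameNode for block locations through Hadoop's Java FileSystem API. The following is a minimal sketch; the path /user/hadoop/sample.txt is just a placeholder for a file that already exists in your cluster.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockPlacement {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml and hdfs-site.xml from the classpath
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical file path, used for illustration only
        Path file = new Path("/user/hadoop/sample.txt");
        FileStatus status = fs.getFileStatus(file);

        // Ask the NameNode where each block's replicas live
        BlockLocation[] blocks =
            fs.getFileBlockLocations(status, 0, status.getLen());

        for (int i = 0; i < blocks.length; i++) {
            System.out.printf("Block %d: hosts=%s racks=%s%n",
                i,
                String.join(",", blocks[i].getHosts()),
                String.join(",", blocks[i].getTopologyPaths()));
        }
        fs.close();
    }
}

Each BlockLocation lists the hosts that hold a replica of that block, and getTopologyPaths() shows the rack position of each host, which is how HDFS keeps replicas spread out widely enough to survive a rack failure.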

In a scenario where a disk drive or a slave node fails, the central metadata server for HDFS (called the NameNode) eventually learns, through missed heartbeats and block reports from the slave nodes, that the file blocks stored on the failed resource are no longer available. For example, if Slave Node 3 fails, Blocks A, C, and D become underreplicated.

In other words, too few copies of these blocks are available in HDFS. When HDFS senses that a block is underreplicated, it orders new copies to be made from the surviving replicas until the target replication factor is restored.
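One rough client-side way to spot an underreplicated file is to compare the number of live replica hosts the NameNode reports for each block against the file's target replication factor. The sketch below assumes the same placeholder path as before; for a cluster-wide view, the hdfs fsck tool is the usual administrative choice.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        Path file = new Path("/user/hadoop/sample.txt"); // hypothetical path
        FileStatus status = fs.getFileStatus(file);
        short target = status.getReplication();          // target replication factor

        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (int i = 0; i < blocks.length; i++) {
            int live = blocks[i].getHosts().length;       // replicas the NameNode currently reports
            if (live < target) {
                System.out.printf("Block %d is underreplicated: %d of %d replicas%n",
                        i, live, target);
            }
        }
        fs.close();
    }
}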

To continue the example, say that Slave Node 3 comes back online after a few hours. Meanwhile, HDFS has ensured that there are three copies of all the file blocks. As a result, Blocks A, C, and D now have four copies apiece and are overreplicated. As with underreplicated blocks, the NameNode finds out about this as well, and orders one replica of each overreplicated block to be deleted.
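Both the re-replication and the cleanup of excess replicas are driven by each file's replication factor, which a client can also change after the fact with setReplication(). In the sketch below (the path is again a placeholder), raising the factor prompts the NameNode to schedule extra copies, and lowering it marks the surplus replicas for deletion, mirroring what happens automatically when Slave Node 3 rejoins the cluster.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AdjustReplication {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/user/hadoop/sample.txt"); // hypothetical path

        // Raise the target to 4: the NameNode schedules extra copies in the background
        fs.setReplication(file, (short) 4);

        // Drop it back to 3: excess replicas are chosen for deletion
        fs.setReplication(file, (short) 3);

        fs.close();
    }
}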

One nice result of the way HDFS keeps data available is that when disk failures do occur, there's no need to replace the failed hard drives immediately. Replacement can be handled more efficiently at regularly scheduled intervals.