Keep Track of Data Blocks with NameNode in HDFS

By Dirk deRoos

The NameNode acts as the address book for Hadoop Distributed File System (HDFS) because it knows not only which blocks make up individual files but also where each of these blocks and their replicas are stored. When a user stores a file in HDFS, the file is divided into data blocks, and three copies of each data block are stored on slave nodes throughout the Hadoop cluster.
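
You can see this address book at work from any Hadoop client. The following minimal Java sketch (the file path is just an example) uses the standard FileSystem API to ask the NameNode which hosts hold each block of a file:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // A minimal sketch, not from this article: asking the NameNode which hosts
    // hold each block of a file. The path below is a made-up example.
    public class ShowBlockLocations {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration()); // reads *-site.xml
            Path file = new Path("/user/hadoop/sample.txt");
            FileStatus status = fs.getFileStatus(file);

            // The NameNode answers this request from its in-memory block map.
            BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.printf("offset %d, length %d, hosts %s%n",
                    block.getOffset(), block.getLength(),
                    String.join(",", block.getHosts()));
            }
            fs.close();
        }
    }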

That’s a lot of data blocks to keep track of. As you might expect, knowing where the bodies are buried makes the NameNode a critically important component in a Hadoop cluster. If the NameNode is unavailable, applications cannot access any data stored in HDFS.

If you take a look at the following figure, you can see the NameNode daemon running on a master node server. All the mapping information dealing with the data blocks and their corresponding files is stored in a file named fsimage.

[Figure: The NameNode daemon running on a master node server.]

HDFS is a journaling file system, which means that any data changes are logged in an edit journal that tracks events since the last checkpoint, the last time the edit log was merged with fsimage. In HDFS, the edit journal is maintained in a file named edits that's stored on the NameNode.
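
You never edit fsimage or the edit journal by hand, but an administrator can force the NameNode to fold the journal into a fresh fsimage (a checkpoint), most often with the hdfs dfsadmin -saveNamespace command. The following Java sketch shows one way to do the same thing programmatically; it assumes a Hadoop 2-style client API, HDFS superuser rights, and a made-up NameNode address:

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

    // A hedged sketch, not from this article: forcing a checkpoint so the edit
    // journal is merged into a new fsimage. Requires HDFS superuser rights; the
    // NameNode address below is an assumption, so use your cluster's own.
    public class ForceCheckpoint {
        public static void main(String[] args) throws Exception {
            DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(
                URI.create("hdfs://namenode-host:8020"), new Configuration());

            dfs.setSafeMode(SafeModeAction.SAFEMODE_ENTER); // metadata goes read-only
            dfs.saveNamespace();                            // merge edits into fsimage
            dfs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
            dfs.close();
        }
    }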

NameNode startup and operation

To understand how the NameNode works, it’s helpful to take a look at how it starts up. Because the purpose of the NameNode is to inform applications of how many data blocks they need to process and to keep track of the exact location where they’re stored, it needs all of the block locations and block-to-file mappings to be available in RAM.

To load all the information it needs, the NameNode takes the following steps after it starts up:

  1. The NameNode loads the fsimage file into memory.

  2. The NameNode loads the edits file and replays the journaled changes to update the block metadata that’s already in memory.

  3. The DataNode daemons send the NameNode block reports.

    For each slave node, there’s a block report that lists all the data blocks stored there and describes the health of each one.

After the startup process is completed, the NameNode has a complete picture of all the data stored in HDFS, and it’s ready to receive application requests from Hadoop clients.
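
Until enough block reports have arrived, the NameNode keeps HDFS in a read-only state called safe mode. The following minimal sketch checks whether the NameNode has finished starting up; the equivalent command-line check is hdfs dfsadmin -safemode get:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

    // A minimal sketch, not from this article: checking whether the NameNode is
    // still in safe mode. Assumes fs.defaultFS points at an HDFS cluster.
    public class CheckSafeMode {
        public static void main(String[] args) throws Exception {
            DistributedFileSystem dfs =
                (DistributedFileSystem) FileSystem.get(new Configuration());
            boolean inSafeMode = dfs.setSafeMode(SafeModeAction.SAFEMODE_GET);
            System.out.println(inSafeMode
                ? "NameNode is still in safe mode (startup not complete)."
                : "NameNode is out of safe mode and accepting writes.");
            dfs.close();
        }
    }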

As data files are added and removed based on client requests, the changes are written to the slave nodes’ disk volumes, journal updates are made to the edits file, and the changes are reflected in the block locations and metadata stored in the NameNode’s memory.

Throughout the life of the cluster, the DataNode daemons send the NameNode heartbeats (a quick signal) every three seconds, indicating they’re active. (This default value is configurable.) Every six hours (again, a configurable default), the DataNodes send the NameNode a block report outlining which file blocks are on their nodes. This way, the NameNode always has a current view of the available resources in the cluster.
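
Both of these intervals are controlled by standard HDFS properties: dfs.heartbeat.interval (in seconds) and dfs.blockreport.intervalMsec (in milliseconds). The following sketch simply reads them from whatever configuration is on the classpath; to change them, set the properties in hdfs-site.xml before starting the daemons:

    import org.apache.hadoop.hdfs.HdfsConfiguration;

    // A hedged sketch, not from this article: reading the two intervals from the
    // configuration on the classpath. The property names are the standard ones;
    // override them in hdfs-site.xml before starting the daemons.
    public class ShowIntervals {
        public static void main(String[] args) {
            HdfsConfiguration conf = new HdfsConfiguration(); // loads hdfs-default/-site.xml
            long heartbeatSecs = conf.getLong("dfs.heartbeat.interval", 3L);
            long blockReportMs = conf.getLong("dfs.blockreport.intervalMsec", 21600000L);
            System.out.println("Heartbeat interval (seconds): " + heartbeatSecs);
            System.out.println("Block report interval (hours): " + blockReportMs / 3600000);
        }
    }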

Writing data

To create a new file in HDFS, the following process takes place (a minimal client-side sketch follows the list):

  1. The client sends a request to the NameNode to create a new file.

    The NameNode determines how many blocks are needed, and the client is granted a lease for creating these new file blocks in the cluster. As part of this lease, the client has a time limit to complete the creation task. (This time limit ensures that storage space isn’t taken up by failed client applications.)

  2. The client then writes the first copies of the file blocks to the slave nodes using the lease assigned by the NameNode.

    The NameNode handles write requests and determines where the file blocks and their replicas need to be written, balancing availability and performance. The first copy of a file block is written to one rack, and the second and third copies are written to two different slave nodes on a second rack. This arrangement minimizes network traffic while ensuring that no single point of failure holds all the copies of a block.

  3. As each block is written to HDFS, a special process writes the remaining replicas to the other slave nodes identified by the NameNode.

  4. After the DataNode daemons acknowledge the file block replicas have been created, the client application closes the file and notifies the NameNode, which then closes the open lease.
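
From an application’s point of view, all of this machinery hides behind a single output stream. Here’s a minimal Java sketch of the write path (the path and contents are made-up examples); the NameNode grants the lease and picks the DataNodes, while the client simply writes bytes and closes the stream:

    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // A minimal sketch, not from this article: creating a file in HDFS. The
    // path and contents are made-up examples.
    public class WriteFile {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path file = new Path("/user/hadoop/new-file.txt");

            // create() asks the NameNode for a lease and the first block locations.
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.write("hello, HDFS".getBytes(StandardCharsets.UTF_8));
            } // close() completes the file, and the NameNode releases the lease

            fs.close();
        }
    }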

Reading data

To read a file from HDFS, the following process takes place (a minimal client-side sketch follows the list):

  1. The client sends a request to the NameNode for a file.

    The NameNode determines which blocks are involved and chooses, based on overall proximity of the blocks to one another and to the client, the most efficient access path.

  2. The client then accesses the blocks using the addresses given by the NameNode.
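
Again, the client sees only a stream. Here’s a minimal Java sketch of the read path (the path is just an example); open() fetches the block addresses from the NameNode, and the stream then pulls the data from the DataNodes:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // A minimal sketch, not from this article: reading a file back from HDFS.
    // open() gets the block addresses from the NameNode; the stream then reads
    // the data from the DataNodes that hold the blocks.
    public class ReadFile {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path file = new Path("/user/hadoop/new-file.txt"); // example path

            try (FSDataInputStream in = fs.open(file);
                 BufferedReader reader = new BufferedReader(
                     new InputStreamReader(in, StandardCharsets.UTF_8))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            }
            fs.close();
        }
    }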

Balancing data in the Hadoop cluster

Over time, through a combination of uneven data-ingestion patterns (where some slave nodes have more data written to them than others) and node failures, data is likely to become unevenly distributed across the racks and slave nodes in your Hadoop cluster.

This uneven distribution can have a detrimental impact on performance because the demand on individual slave nodes will become unbalanced; nodes with little data won’t be fully used; and nodes with many blocks will be overused. (Note: The overuse and underuse are based on disk activity, not on CPU or RAM.)

HDFS includes a balancer utility to redistribute blocks from overused slave nodes to underused ones while maintaining the policy of putting blocks on different slave nodes and racks. Hadoop administrators should regularly check HDFS health, and if data becomes unevenly distributed, they should invoke the balancer utility.
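
One way to see whether a cluster needs balancing is to survey how full each DataNode is. The following sketch prints the disk usage that the NameNode reports for each DataNode; the balancer itself is normally run from the command line as hdfs balancer:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

    // A hedged sketch, not from this article: printing how full each DataNode is,
    // using the statistics the NameNode keeps. Assumes fs.defaultFS points at HDFS.
    public class SurveyDataNodes {
        public static void main(String[] args) throws Exception {
            DistributedFileSystem dfs =
                (DistributedFileSystem) FileSystem.get(new Configuration());
            for (DatanodeInfo node : dfs.getDataNodeStats()) {
                System.out.printf("%s  used: %.1f%%%n",
                    node.getHostName(), node.getDfsUsedPercent());
            }
            dfs.close();
        }
    }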

NameNode master server design

Because of its mission-critical nature, the master server running the NameNode daemon has markedly different hardware requirements from those of a slave node. Most significantly, enterprise-grade components need to be used to minimize the probability of an outage. You also need enough RAM to hold in memory all the metadata and location data for every data block stored in HDFS.
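
How much RAM is enough? A commonly cited rule of thumb is that every file, directory, and block costs the NameNode roughly 150 bytes of heap, so you can sketch a rough estimate like this:

    // A back-of-the-envelope sketch, not from this article. It uses the commonly
    // cited rule of thumb of roughly 150 bytes of NameNode heap per file,
    // directory, or block; the counts below are assumptions, not recommendations.
    public class EstimateNameNodeHeap {
        public static void main(String[] args) {
            long files = 20_000_000L;    // assumed number of files
            long blocksPerFile = 2L;     // assumed average blocks per file
            long bytesPerObject = 150L;  // rough per-object metadata cost

            long objects = files + (files * blocksPerFile);
            double heapGb = objects * bytesPerObject / (1024.0 * 1024 * 1024);
            System.out.printf("Rough NameNode heap estimate: %.1f GB%n", heapGb);
        }
    }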