Summary -
In this topic, we cover the daemons that make up the Hadoop architecture.
Hadoop follows a Master-Slave architecture that is composed of two groups of daemons. A daemon is a background service that runs on a Hadoop node.
The two groups of daemons are -
- Master Daemons
- Slave Daemons
Each group is further divided into individual daemons as below.
- Master Daemons
  - Name Node
  - Secondary Name Node
  - Job Tracker
- Slave Daemons
  - Data Node
  - Task Tracker
Let’s discuss each daemon in detail.
Name Node -
The Name Node is the centerpiece of the HDFS file system and the master node of the Hadoop architecture. It keeps the directory tree of all files in the file system and tracks where each file’s data is kept across the cluster.
The Name Node does not store the data of these files itself. Client applications talk to the Name Node whenever they wish to add, copy, move, delete, or locate a file. The Name Node is responsible for maintaining the metadata of the Hadoop file system.
The Name Node responds to successful requests by returning a list of the relevant Data Node servers where the data lives. The Name Node is a single point of failure for the HDFS cluster: when the Name Node goes down, the file system goes offline.
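To make this concrete, here is a minimal sketch using the standard Hadoop Java client that asks the Name Node where the blocks of a file live. The file path /user/demo/sample.txt is hypothetical; the FileSystem and BlockLocation calls are the standard ones.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LocateBlocks {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();      // reads core-site.xml / hdfs-site.xml
            FileSystem fs = FileSystem.get(conf);          // metadata calls go to the Name Node
            Path file = new Path("/user/demo/sample.txt"); // hypothetical file

            FileStatus status = fs.getFileStatus(file);
            // The Name Node answers with the Data Nodes that hold each block.
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation block : blocks) {
                System.out.println("offset=" + block.getOffset()
                        + " length=" + block.getLength()
                        + " hosts=" + String.join(",", block.getHosts()));
            }
            fs.close();
        }
    }

Note that only metadata flows through the Name Node here; the file contents themselves would be read from the Data Nodes.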
The Name Node maintains the file system namespace, which contains the metadata for all files and directories, and executes all namespace and file operations. The Name Node records its state in two important persistent files -
- fsimage, the file system namespace image
- edits, the edit log of changes
Secondary Name Node -
The Secondary Name Node performs periodic checkpoints of the namespace and helps keep the size of the file containing the log of HDFS modifications within limits at the Name Node. In newer Hadoop releases, the Secondary Name Node is deprecated and replaced by the Checkpoint Node.
The Name Node stores modifications to the file system as a log appended to a native file system file called edits. When a Name Node starts up, it reads the HDFS state from an image file, fsimage, and then applies the edits from the edits log file. It then writes the new HDFS state to fsimage and starts normal operation with an empty edits file. If the primary Name Node’s metadata is ever lost, the checkpoint kept by the Secondary Name Node can help recover it.
The Secondary Name Node merges the fsimage and edits log files periodically, keeping the edits log size within a limit.
The Secondary Name Node is not exactly a replacement for the primary Name Node. It usually runs on a different machine than the primary Name Node, since its memory requirements are on the same order as the primary’s. The start of the checkpoint process on the Secondary Name Node is controlled by two configuration parameters (a configuration sketch follows the list) -
- fs.checkpoint.period, set to 1 hour by default, specifies the maximum delay between two consecutive checkpoints.
- fs.checkpoint.size, set to 64 MB by default, defines the size of the edits log file that forces an urgent checkpoint even if the maximum checkpoint delay has not been reached.
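As a sketch, the two parameters named above could be tuned in core-site.xml on the node running the Secondary Name Node; the file placement and default values here assume a Hadoop 1.x deployment.

    <!-- core-site.xml: checkpoint tuning for the Secondary Name Node -->
    <property>
      <name>fs.checkpoint.period</name>
      <value>3600</value>      <!-- seconds between checkpoints; 3600 = 1 hour default -->
    </property>
    <property>
      <name>fs.checkpoint.size</name>
      <value>67108864</value>  <!-- edits size in bytes that forces a checkpoint; 64 MB default -->
    </property>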
The Secondary Name Node stores the latest checkpoint in a directory that is structured the same way as the primary Name Node’s directory.
Task Tracker -
A Task Tracker is a node in the cluster that accepts tasks, such as Map, Reduce, and Shuffle operations, from the Job Tracker. The Task Tracker is responsible for instantiating and monitoring individual map and reduce tasks, and it runs as a software daemon in the Hadoop architecture.
Every Task Tracker is configured with a set of slots; the slot count indicates the number of tasks it can accept concurrently. The Task Tracker is primarily responsible for executing the tasks assigned to it by the Job Tracker in the form of MapReduce jobs. In general, a Task Tracker resides on top of a Data Node.
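As a sketch, the slot counts are set per Task Tracker. The snippet below assumes the classic Hadoop 1.x property names mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum; the values are illustrative.

    <!-- mapred-site.xml on each Task Tracker node -->
    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>4</value>   <!-- map slots; often sized to the node's CPU count -->
    </property>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>2</value>   <!-- reduce slots -->
    </property>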
Job Tracker -
The Job Tracker is the service within Hadoop that farms out MapReduce tasks to specific nodes in the cluster, ideally the nodes that hold the data, or at least nodes in the same rack.
The Job Tracker is responsible for scheduling and rescheduling tasks, which arrive in the form of MapReduce jobs, and it receives responses/acknowledgements back from the Task Trackers.
In general, the Job Tracker resides on top of the Name Node. The Job Tracker manages the MapReduce tasks and distributes individual tasks to the machines running a Task Tracker. Client applications submit jobs to the Job Tracker.
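For illustration, here is a minimal sketch of a client submitting a job to the Job Tracker through the classic Hadoop 1.x mapred API. The job name and paths are hypothetical, and no mapper or reducer is set, so the default identity classes run.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class SubmitJob {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(SubmitJob.class);
            conf.setJobName("demo-job");                   // hypothetical job name
            // Matches the default identity mapper/reducer over text input.
            conf.setOutputKeyClass(LongWritable.class);
            conf.setOutputValueClass(Text.class);
            FileInputFormat.setInputPaths(conf, new Path("/user/demo/in"));   // hypothetical
            FileOutputFormat.setOutputPath(conf, new Path("/user/demo/out")); // hypothetical
            // Hands the job to the Job Tracker and blocks until it finishes.
            JobClient.runJob(conf);
        }
    }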
The Job Tracker talks to the Name Node to determine the location of the data, locates Task Tracker nodes with available slots at or near the data, and submits the work to the chosen Task Tracker nodes.
Task Trackers report to the Job Tracker with heartbeat signals; if a Task Tracker does not submit them often enough, the Job Tracker considers it failed. When a task fails, the Job Tracker decides what to do: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, or it may even blacklist the Task Tracker as unreliable.
When the work is completed, the Job Tracker updates its status. Client applications can poll the Job Tracker for information. The Job Tracker is a single point of failure for the Hadoop MapReduce service: if it goes down, all running jobs are halted.
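A minimal polling sketch using the same Hadoop 1.x mapred API; it assumes a JobConf prepared as in the submission sketch above.

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;

    public class PollJob {
        // Submit asynchronously, then poll the Job Tracker for progress.
        public static void pollUntilDone(JobConf conf) throws Exception {
            JobClient client = new JobClient(conf);
            RunningJob job = client.submitJob(conf);   // returns immediately
            while (!job.isComplete()) {
                System.out.printf("map %.0f%%, reduce %.0f%%%n",
                        job.mapProgress() * 100, job.reduceProgress() * 100);
                Thread.sleep(5000);                    // poll every 5 seconds
            }
            System.out.println(job.isSuccessful() ? "job succeeded" : "job failed");
        }
    }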
Data Node -
A Data Node stores data in the Hadoop file system; it is the place where the actual data is held. A functional file system has more than one Data Node, with data distributed across them. Data on the Data Nodes is stored only in the form of HDFS blocks, and by default each block is 64 MB.
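A client can override the default block size per file at creation time. A minimal sketch, using the standard FileSystem.create overload and a hypothetical path:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteWithBlockSize {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/user/demo/big.dat");   // hypothetical path
            // create(path, overwrite, bufferSize, replication, blockSize)
            FSDataOutputStream out =
                    fs.create(file, true, 4096, (short) 3, 128L * 1024 * 1024); // 128 MB blocks
            out.writeUTF("this data lands on Data Nodes as HDFS blocks");
            out.close();
            fs.close();
        }
    }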
On startup, a Data Node connects to the Name Node and establishes the service. The Data Node then responds to requests from the Name Node for file system operations. Data Nodes store and retrieve blocks and report back to the Name Node.
Client applications can talk directly to a Data Node, using the location of the data provided by the Name Node. Task Tracker instances can, and indeed should, be deployed on the same servers that host Data Node instances. A client accesses the file system on behalf of the user by communicating with the Data Nodes.
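Here is a minimal read sketch showing that flow: open() fetches block locations from the Name Node, and the bytes are then streamed straight from the Data Nodes. The path is hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class ReadFromDataNodes {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            // The Name Node supplies the block locations; the data itself
            // is streamed directly from the Data Nodes holding each block.
            FSDataInputStream in = fs.open(new Path("/user/demo/sample.txt")); // hypothetical
            IOUtils.copyBytes(in, System.out, 4096, false);  // copy file contents to stdout
            in.close();
            fs.close();
        }
    }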
There is usually no need to use RAID storage for Data Node data, because data is designed to be replicated across multiple servers rather than across multiple disks on the same server.
An ideal configuration is for a server to host a Data Node and a Task Tracker, with one Task Tracker slot per CPU and separate physical disks. This allows every Task Tracker slot 100% of a CPU and separate disks to read and write data.