Monday, January 6, 2014

Hadoop Interview Questions – MapReduce


Looking out for Hadoop Interview Questions that are frequently asked by employers? Here is a list which covers MapReduce…

What is MapReduce?
It is a framework or a programming model that is used for processing large data sets over clusters of computers using distributed programming.

What are ‘maps’ and ‘reduces’?
‘Map’ and ‘Reduce’ are the two phases through which a MapReduce job processes data. The ‘map’ phase is responsible for reading data from the input location and, based on the input type, generating key-value pairs, that is, an intermediate output on the local machine. The ‘reduce’ phase is responsible for processing the intermediate output received from the mappers and generating the final output.

What are the four basic parameters of a mapper?
The four basic parameters of a mapper are LongWritable, Text, Text and IntWritable. The first two represent the input parameters and the second two represent the intermediate output parameters.

What are the four basic parameters of a reducer?
The four basic parameters of a reducer are Text, IntWritable, Text and IntWritable. The first two represent the intermediate output parameters and the second two represent the final output parameters.
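For instance, a minimal word-count sketch (assuming the newer org.apache.hadoop.mapreduce API; the class names are illustrative) shows where these four parameters appear in both classes:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Mapper<LongWritable, Text, Text, IntWritable>: the first two types are the
    // input key/value (byte offset and line), the last two the intermediate output.
    public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (token.isEmpty()) continue;  // skip leading-whitespace artifacts
                word.set(token);
                context.write(word, ONE);       // emit an intermediate key-value pair
            }
        }
    }

    // Reducer<Text, IntWritable, Text, IntWritable>: the first two types are the
    // intermediate output from the mapper, the last two the final output.
    class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();                 // add up all counts for this key
            }
            context.write(key, new IntWritable(sum)); // final output pair
        }
    }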

What do the master class and the output class do?
The master class is defined to update the master, that is, the JobTracker, and the output class is defined to write data onto the output location.

What is the input type/format in MapReduce by default?
By default, the input type/format in MapReduce is ‘text’.

Is it mandatory to set input and output type/format in MapReduce?
No, it is not mandatory to set the input and output type/format in MapReduce. By default, the cluster takes the input and the output type as ‘text’.

What does the text input format do?
In text input format, each line of the file creates a line object. The key is the byte offset of the line within the file, and the value is the content of the whole line. This is how the data gets processed by a mapper: it receives the key as a ‘LongWritable’ parameter and the value as a ‘Text’ parameter.

What does the JobConf class do?
MapReduce needs to logically separate the different jobs running on the same cluster. The ‘JobConf’ class helps with job-level settings such as declaring a job in the real environment. It is recommended that the job name be descriptive and represent the type of job being executed.

What does conf.setMapperClass do?
conf.setMapperClass sets the mapper class and everything related to the map job, such as reading the data and generating a key-value pair out of the mapper.
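A minimal driver sketch using the old mapred API that JobConf belongs to (MyMapper and MyReducer are hypothetical stand-ins for your own classes):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(WordCountDriver.class);
            conf.setJobName("word-count");         // a descriptive job name

            conf.setMapperClass(MyMapper.class);   // hypothetical mapper class
            conf.setReducerClass(MyReducer.class); // hypothetical reducer class

            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);

            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));

            JobClient.runJob(conf);                // submit the job and wait
        }
    }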

What do sorting and shuffling do?
Sorting and shuffling are responsible for creating a unique key and a list of values. Bringing similar keys together at one location is known as sorting, and the process by which the sorted intermediate output of the mapper is sent across to the reducers is known as shuffling.

What does a split do?
Before data is transferred from its hard disk location to the map method, there is a phase or method called the ‘split’. The split pulls a block of data from HDFS into the framework. The Split class does not write anything; it only reads data from the block and passes it to the mapper. By default, splitting is taken care of by the framework: the split size is equal to the block size and is used to divide the block into a bunch of splits.

How can we change the split size if our commodity hardware has less storage space?
If our commodity hardware has less storage space, we can change the split size by writing a ‘custom splitter’. Hadoop offers this customization feature, which can be invoked from the main method.
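As a lighter-weight alternative to a full custom splitter, the split size can often be tuned from the driver; a sketch assuming the newer mapreduce API, where FileInputFormat computes each split as max(minSize, min(maxSize, blockSize)):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class SmallSplitJob {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "small-split-job");
            // Lowering the maximum split size yields smaller, more numerous splits.
            FileInputFormat.setMaxInputSplitSize(job, 32 * 1024 * 1024L); // 32 MB
            FileInputFormat.setMinInputSplitSize(job, 16 * 1024 * 1024L); // 16 MB
        }
    }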

What does a MapReduce partitioner do?
A MapReduce partitioner makes sure that all the values of a single key go to the same reducer, thus allowing even distribution of the map output over the reducers. It redirects the mapper output to the reducer by determining which reducer is responsible for a particular key.
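As an illustration, the default hash-partitioning idea looks roughly like this (a sketch; the class name is made up):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Every occurrence of the same key hashes to the same reducer index,
    // so all of its values meet in a single reduce call.
    public class KeyHashPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numReduceTasks) {
            return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
        }
    }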

How is Hadoop different from other data processing tools?
In Hadoop, based upon your requirements, you can increase or decrease the number of mappers without bothering about the volume of data to be processed. This is the beauty of parallel processing in contrast to the other data processing tools available.

Can we rename the output file?
Yes, we can rename the output file by implementing a multiple-output format class.
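A sketch using the MultipleOutputs helper from the newer API (the named output "renamed" is illustrative, and it must first be registered in the driver as shown in the comment):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

    // Driver wiring (job is your Job instance):
    //   MultipleOutputs.addNamedOutput(job, "renamed",
    //       TextOutputFormat.class, Text.class, IntWritable.class);
    public class RenamedOutputReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private MultipleOutputs<Text, IntWritable> out;

        @Override
        protected void setup(Context context) {
            out = new MultipleOutputs<>(context);
        }

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            out.write("renamed", key, new IntWritable(sum)); // lands in renamed-r-00000
        }

        @Override
        protected void cleanup(Context context) throws IOException, InterruptedException {
            out.close(); // flush and close the named outputs
        }
    }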

Why can we not do aggregation (addition) in a mapper? Why do we require a reducer for that?
We cannot do aggregation (addition) in a mapper because sorting does not happen in a mapper; sorting happens only on the reducer side. Moreover, a new mapper instance is initialized for each input split, so while aggregating we would lose the values accumulated by the previous instance: no single mapper sees all the rows, and we have no track of the previous rows' values across splits.

What is Streaming?
Streaming is a feature of the Hadoop framework that allows us to write MapReduce programs in any programming language that can accept standard input and produce standard output. It could be Perl, Python or Ruby, and not necessarily Java. However, customization in MapReduce can only be done using Java and not any other programming language.

What is a Combiner?
A ‘Combiner’ is a mini reducer that performs the local reduce task. It receives the input from the mapper on a particular node and sends the output to the reducer. Combiners help in enhancing the efficiency of MapReduce by reducing the quantum of data that is required to be sent to the reducers.
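For example, the word-count reducer sketched earlier can double as a combiner, because summing counts is associative and commutative (only the relevant driver lines, wrapped so they compile):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;

    public class CombinerDemo {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word-count-with-combiner");
            job.setMapperClass(WordCountMapper.class);    // from the earlier sketch
            job.setCombinerClass(WordCountReducer.class); // local "mini reduce" on map output
            job.setReducerClass(WordCountReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
        }
    }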

What is the difference between an HDFS Block and Input Split?
HDFS Block is the physical division of the data and Input Split is the logical division of the data.

What happens in a TextInputFormat?
In TextInputFormat, each line in the text file is a record. Key is the byte offset of the line and value is the content of the line.
For instance, key: LongWritable, value: Text.

What do you know about KeyValueTextInputFormat?
In KeyValueTextInputFormat, each line in the text file is a ‘record’. The first separator character divides each line. Everything before the separator is the key and everything after the separator is the value.
For instance, Key: Text, value: Text.
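Selecting it from the driver might look like this (a sketch; the separator property name is the Hadoop 2.x one, and the comma separator is just an example since tab is the default):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;

    public class KeyValueInputDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Everything before the first ',' on a line becomes the key (Text),
            // everything after it becomes the value (Text).
            conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", ",");
            Job job = Job.getInstance(conf, "key-value-input-demo");
            job.setInputFormatClass(KeyValueTextInputFormat.class);
        }
    }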

What do you know about SequenceFileInputFormat?
SequenceFileInputFormat is an input format for reading sequence files. Keys and values are user defined. It is a specific compressed binary file format optimized for passing data from the output of one MapReduce job to the input of another MapReduce job.

What do you know about NLineInputFormat?
NLineInputFormat splits ‘n’ lines of input as one split.
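A driver sketch (newer API; 100 lines per split is just an example value):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

    public class NLineDemo {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "nline-demo");
            job.setInputFormatClass(NLineInputFormat.class);
            NLineInputFormat.setNumLinesPerSplit(job, 100); // each split carries 100 lines
        }
    }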

What is a JobTracker in Hadoop? How many instances of JobTracker run on a Hadoop Cluster?
JobTracker is the daemon service for submitting and tracking MapReduce jobs in Hadoop. Only one JobTracker process runs on any Hadoop cluster, in its own JVM; in a typical production cluster it runs on a separate machine. Each slave node is configured with the JobTracker node's location. The JobTracker is a single point of failure for the Hadoop MapReduce service: if it goes down, all running jobs are halted. The JobTracker in Hadoop performs the following actions (from the Hadoop wiki):
1. Client applications submit jobs to the JobTracker.
2. The JobTracker talks to the NameNode to determine the location of the data.
3. The JobTracker locates TaskTracker nodes with available slots at or near the data.
4. The JobTracker submits the work to the chosen TaskTracker nodes.
5. The TaskTracker nodes are monitored. If they do not submit heartbeat signals often enough, they are deemed to have failed and the work is scheduled on a different TaskTracker.
6. A TaskTracker will notify the JobTracker when a task fails. The JobTracker decides what to do then: it may resubmit the job elsewhere, it may mark that specific record as something to avoid, and it may even blacklist the TaskTracker as unreliable.
7. When the work is completed, the JobTracker updates its status.

 

Friday, November 29, 2013

Hadoop Interview Questions – Setting up Hadoop Cluster!


Looking out for Hadoop Interview Questions that are frequently asked by employers? Here is a list of Hadoop Interview Questions which covers setting up a Hadoop cluster…
Which are the three modes in which Hadoop can be run?
The three modes in which Hadoop can be run are:
1. Standalone (local) mode
2. Pseudo-distributed mode
3. Fully distributed mode
What are the features of Stand-alone (local) mode?
In stand-alone mode there are no daemons; everything runs on a single JVM. It has no DFS and utilizes the local file system. Stand-alone mode is suitable only for running MapReduce programs during development. It is one of the least used environments.
 What are the features of Pseudo mode?
Pseudo mode is used both for development and in the QA environment. In the Pseudo mode all the daemons run on the same machine.
Can we call VMs pseudos?
No, VMs are not pseudos, because a VM is something different and pseudo mode is very specific to Hadoop.
 What are the features of Fully Distributed mode?
Fully Distributed mode is used in the production environment, where we have ‘n’ number of machines forming a Hadoop cluster. Hadoop daemons run on a cluster of machines.
There is one host onto which Namenode is running and another host on which datanode is running and then there are machines on which task tracker is running. We have separate masters and separate slaves in this distribution.
Does Hadoop follow the UNIX pattern?
Yes, Hadoop closely follows the UNIX pattern. Hadoop also has the ‘conf‘ directory as in the case of UNIX.
In which directory Hadoop is installed?
Cloudera and Apache have the same directory structure. Hadoop is installed in /usr/lib/hadoop-0.20/.
What are the port numbers of Namenode, job tracker and task tracker?
The port number for the Namenode is 50070, for the job tracker 50030 and for the task tracker 50060.
What is the Hadoop-core configuration?
Hadoop core used to be configured by two XML files:
1. hadoop-default.xml, which held the default configuration, and
2. hadoop-site.xml, which held the site-specific overrides.
These files are written in XML format and contain a set of properties, each consisting of a name and a value. However, these files do not exist now.
 What are the Hadoop configuration files at present?
There are 3 configuration files in Hadoop:
1. core-site.xml
2. hdfs-site.xml
3. mapred-site.xml
These files are located in the conf/ subdirectory.
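For instance, a minimal core-site.xml for a pseudo-distributed, 0.20-era setup might look like this (the host and port are illustrative; in later versions fs.default.name was renamed fs.defaultFS):

    <configuration>
      <!-- Tells clients and daemons where to find the Namenode -->
      <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:8020</value>
      </property>
    </configuration>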
How to exit the Vi editor?
To exit the Vi Editor, press ESC and type :q and then press enter.
What is a spill factor with respect to the RAM?
The spill factor is the threshold at which the in-memory buffer holding map output is flushed (‘spilled’) to temporary files on disk. The Hadoop temp directory is used for this.
 Is fs.mapr.working.dir a single directory?
Yes, fs.mapr.working.dir is just one directory.
Which are the three main hdfs-site.xml properties?
The three main hdfs-site.xml properties are:
1. dfs.name.dir, which gives you the location where the metadata will be stored and where DFS is located – on disk or on a remote machine.
2. dfs.data.dir, which gives you the location where the data is going to be stored.
3. fs.checkpoint.dir, which is for the secondary Namenode.
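A minimal hdfs-site.xml sketch using these three properties (the paths are placeholders):

    <configuration>
      <property>
        <name>dfs.name.dir</name>
        <value>/var/lib/hadoop/dfs/name</value>          <!-- Namenode metadata -->
      </property>
      <property>
        <name>dfs.data.dir</name>
        <value>/var/lib/hadoop/dfs/data</value>          <!-- Datanode block storage -->
      </property>
      <property>
        <name>fs.checkpoint.dir</name>
        <value>/var/lib/hadoop/dfs/namesecondary</value> <!-- secondary Namenode checkpoints -->
      </property>
    </configuration>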
 How to come out of the insert mode?
To come out of the insert mode, press ESC, type :q (if you have not written anything) OR type :wq (if you have written anything in the file) and then press ENTER.
What is Cloudera and why it is used?
Cloudera is a distribution of Hadoop. ‘cloudera’ is also the user created on the VM by default. Cloudera packages Apache Hadoop and is used for data processing.
What happens if you get a ‘connection refused java exception’ when you type hadoop fsck /?
It could mean that the Namenode is not working on your VM.
We are using Ubuntu operating system with Cloudera, but from where we can download Hadoop or does it come by default with Ubuntu?
This is a default configuration of Hadoop that you have to download from Cloudera or from Edureka's dropbox and then run it on your systems. You can also proceed with your own configuration, but you need a Linux box, be it Ubuntu or Red Hat. There are installation steps present at the Cloudera location or in Edureka's dropbox. You can go either way.
What does the ‘jps’ command do?
This command checks whether your Namenode, datanode, task tracker, job tracker, etc. are working or not.
 How can I restart Namenode?
1. Run stop-all.sh and then start-all.sh, OR
2. Run sudo hdfs (press enter), su-hdfs (press enter), /etc/init.d/ha (press enter) and then /etc/init.d/hadoop-0.20-namenode start (press enter).
 What is the full form of fsck?
Full form of fsck is File System Check.
 How can we check whether Namenode is working or not?
To check whether Namenode is working or not, use the command /etc/init.d/hadoop-0.20-namenode status or as simple as jps.
What does the mapred.job.tracker property do?
The mapred.job.tracker property specifies which of your nodes is acting as the job tracker.
What does /etc/init.d do?
/etc/init.d specifies where daemons (services) are placed, and lets you see the status of these daemons. It is very Linux-specific and has nothing to do with Hadoop.
How can we look for the Namenode in the browser?
If you have to look for the Namenode in the browser, you don't use localhost:8021; the port number to look for the Namenode in the browser is 50070 (for example, localhost:50070).
 How to change from SU to Cloudera?
To change from SU to Cloudera just type exit.
 Which files are used by the startup and shutdown commands?
Slaves and Masters are used by the startup and the shutdown commands.
 What do slaves consist of?
Slaves consist of a list of hosts, one per line, that host datanode and task tracker servers.
 What do masters consist of?
Masters contain a list of hosts, one per line, that are to host secondary namenode servers.
 What does hadoop-env.sh do?
hadoop-env.sh provides the environment for Hadoop to run. JAVA_HOME is set over here.
 Can we have multiple entries in the master files?
Yes, we can have multiple entries in the Master files.
Where is hadoop-env.sh file present?
hadoop-env.sh file is present in the conf location.
In HADOOP_PID_DIR, what does PID stand for?
PID stands for ‘Process ID’.
 What does /var/hadoop/pids do?
It stores the PID.
 What does hadoop-metrics.properties file do?
hadoop-metrics.properties is used for ‘Reporting‘ purposes. It controls the reporting for Hadoop. The default status is ‘not to report‘.
What are the network requirements for Hadoop?
The Hadoop core uses the secure shell (SSH) to launch the server processes on the slave nodes. It requires a password-less SSH connection between the master and all the slaves and the secondary machines.
Why do we need a password-less SSH in Fully Distributed environment?
We need a password-less SSH in a Fully-Distributed environment because when the cluster is LIVE and running in Fully Distributed environment, the communication is too frequent. The job tracker should be able to send a task to task tracker quickly.
 Does this lead to security issues?
No, not at all. A Hadoop cluster is an isolated cluster, and generally it has nothing to do with the internet. It has a different kind of configuration, so we needn't worry about that kind of security breach, for instance, someone hacking through the internet. Hadoop has a very secure way of connecting to other machines to fetch and process data.
On which port does SSH work?
SSH works on Port No. 22, though it can be configured. 22 is the default Port number.
 Can you tell us more about SSH?
SSH is nothing but secure shell communication; it is a protocol that works on port 22 by default, and when you do an SSH, what you really require is a password.
 Why password is needed in SSH localhost?
A password is required in SSH for security, and in situations where password-less communication is not set up.
Do we need to give a password, even if the key is added in SSH?
Yes, password is still required even if the key is added in SSH.
What if a Namenode has no data?
If a Namenode has no data it is not a Namenode. Practically, Namenode will have some data.
 What happens to job tracker when Namenode is down?
When Namenode is down, your cluster is OFF, this is because Namenode is the single point of failure in HDFS.
What happens to a Namenode, when job tracker is down?
When a job tracker is down, it will not be functional but Namenode will be present. So, cluster is accessible if Namenode is working, even if the job tracker is not working.
 Can you give us some more details about SSH communication between Masters and the Slaves?
SSH is a password-less secure communication where data packets are sent across to the slaves. It has a format into which data is sent. SSH is not only between masters and slaves, but also between two hosts.

What is formatting of the DFS?
Just as we format a drive in Windows, the DFS is formatted for proper structuring. It is not usually done, as it formats the Namenode too.
Does the HDFS client decide the input split or the Namenode?
No, the client does not decide. The input split is already specified in one of the configurations.
 In Cloudera there is already a cluster, but if I want to form a cluster on Ubuntu can we do it?
Yes, you can go ahead with this! There are installation steps for creating a new cluster. You can uninstall your present cluster and install the new cluster.
Can we create a Hadoop cluster from scratch?
Yes, we can do that too, once we are familiar with the Hadoop environment.
 Can we use Windows for Hadoop?
Actually, Red Hat Linux or Ubuntu are the best Operating Systems for Hadoop. Windows is not used frequently for installing Hadoop as there are many support problems attached with Windows. Thus, Windows is not a preferred environment for Hadoop.