
Executing MapReduce Applications on Hadoop (Single-node Cluster) - Part 1

Okay. You just set up Hadoop on a single node on a VM and are now wondering what comes next. Of course, you’ll run something on it, and what could be better than your own piece of code? But before we move to that, let’s first run an existing program to make sure things are set up properly on our Hadoop cluster.

Power up your Ubuntu VM with Hadoop on it, open a Terminal (Ctrl+Alt+T) and run the following command:
$ start-all.sh

Provide the password whenever asked, and once everything has started, execute the following command to make sure all the services are running:
$ jps

Note: The “jps” utility ships with the Oracle JDK and may be missing from some OpenJDK packages. See, there are reasons Oracle JDK was recommended in the first place.

You should be able to see the following services:
NameNode
SecondaryNameNode
DataNode
JobTracker
TaskTracker
Jps



We'll take a minute to very briefly define these services first.

NameNode: a component of HDFS (Hadoop Distributed File System) that manages all the file system metadata: the directory structure, file-to-block mappings, permissions, and so on. You can track the status of the NameNode at http://localhost:50070 in the browser of your machine (localhost will be some other address if you are not using a single-node deployment).

SecondaryNameNode: no, this is not a backup or replica of the NameNode. Its primary responsibility is to periodically merge the edit logs created by the NameNode into the file system image (checkpointing), since otherwise those logs can grow huge.

DataNode: this one stores the actual data blocks. In a multi-node cluster, you will have more DataNodes. Clients go through the NameNode to find out where data lives, and the reads and writes themselves happen on the DataNodes.

JobTracker: this service relates to MapReduce jobs. It accepts the jobs that clients submit to Hadoop for processing, talks to the NameNode to locate the relevant data, then picks the most appropriate nodes in the cluster to assign tasks to. Additionally, it reassigns tasks when a node fails to complete them. The status of the JobTracker can be viewed on http://localhost:50030

TaskTracker: a node in the cluster that performs the Map, Reduce and Shuffle operations assigned to it by the JobTracker. It keeps sending heartbeat messages to the JobTracker to report its status.
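
To make the division of labour concrete, here is a small, optional Java sketch (an illustration only, assuming the Hadoop 1.x FileSystem client API and a core-site.xml on the classpath; the class name ListHdfsRoot is made up). The directory listing is answered entirely by the NameNode from its metadata; reading or writing the file contents would involve the DataNodes.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListHdfsRoot {
  public static void main(String[] args) throws Exception {
    // Picks up fs.default.name (e.g. hdfs://localhost:9000) from core-site.xml
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Metadata-only operation: served by the NameNode
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
    }

    // Actual data transfer (fs.open / fs.create) would stream blocks to and from
    // the DataNodes, after the NameNode tells the client where they are.
    fs.close();
  }
}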

Enough theory, let’s get back to real stuff.

We will begin with the Word Count example provided with Hadoop and see how it goes. This utility does nothing fancy; it counts the number of occurrences of each word in a bunch of text files (a sketch of the code behind it follows after the steps). Here are the steps:
  1. Fetch some plain text files (novels recommended), create a directory “books” in your Documents and copy these files into it. I’ll be using some Sherlock Holmes novels I downloaded from http://www.readsherlock.com, but you can use any text files.


  2. Copy these files into HDFS using the dfs utility:
    $ hadoop dfs -copyFromLocal $HOME/Documents/books /HDFS/books

  3. Confirm that these files have been copied using the ls command:
    $ hadoop dfs -ls /HDFS/books
  4. Execute the wordcount example from the examples jar that ships with Hadoop:
    $ hadoop jar $HADOOP_HOME/hadoop-examples-1.2.1.jar wordcount /HDFS/books /HDFS/books/output


  5. The MapReduce job “wordcount” in the hadoop-examples jar will execute, pick up the text files from /HDFS/books, count the occurrences of each unique word and write the output to /HDFS/books/output. You can also check the following in your web browser to trace the job’s status:
    http://localhost:50070
    http://localhost:50060
    http://localhost:50030
  6. In order to collect the output file, run the following command:
    $ hadoop dfs -getmerge /HDFS/books/output $HOME/Documents/books/
The output file should now be in your Documents/books directory in readable form.
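
For the curious, the wordcount job in the examples jar is essentially the canonical WordCount program. A minimal sketch along those lines, written against the Hadoop 1.x org.apache.hadoop.mapreduce API, is shown below; treat it as an illustration of the map and reduce steps rather than the exact source shipped in the jar.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map step: emit (word, 1) for every token in the input
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce step: sum the counts emitted for each word
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "word count"); // Job.getInstance(conf) in newer Hadoop versions
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // e.g. /HDFS/books
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // e.g. /HDFS/books/output
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}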


Comments

  1. this works !!! thanks

  2. hadoop dfs -copyFromLocal $HOME/Documents/books /HDFS/books: here you have used /HDFS/books. How is it possible without making the directory in HDFS first? I am getting an error: copyFromLocal: `/HDFS/gutenberg': No such file or directory

    Replies
    1. Then create the directory on hdfs: hadoop dfs -mkdir /HDFS/dir_name and try copyFromLocal again :)

      ~SSL

