Previously, we saw how to execute the built-in Word Count example on Hadoop. In this part, we will build the same application in Eclipse from the Word Count source code and run it.
First, you need to install Eclipse on your Hadoop-ready virtual machine (assuming that the JDK was already installed when you set up Hadoop). You can install it from the Ubuntu Software Center, but my recommendation is to download it and extract it to your home directory. Any version of Eclipse should work; I have done these experiments on version 4.3 (Kepler).
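If you go the download route, extracting and launching comes down to something like the following (the archive name is an assumption and will vary with the exact release you download):
$ tar -xzf eclipse-*.tar.gz -C ~/    # extract the downloaded archive into your home directory
$ ~/eclipse/eclipse &                # launch Eclipse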
After installation, launch Eclipse. The first thing to do is to make the Oracle JDK your default Java runtime:
- Go to Window > Preferences > Java > Installed JREs
- If the default JRE does not point to the Oracle JDK, edit it and set its home directory to /usr/lib/jvm/java-7-oracle/
- Press OK to finish
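Before pointing Eclipse at that directory, you can double-check from a terminal that the JDK is actually there (the java-7-oracle path assumes the Oracle JDK 7 installation from the Hadoop setup):
$ ls /usr/lib/jvm/    # the Oracle JDK directory should be listed here
$ java -version       # should report the Oracle/HotSpot JDK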
Now we will create a Java Application Project:
- Go to New > Java Project
- Name the project Combinatorics, since we will be doing some counting problems in this project
- No need to change anything else. Press Finish
- A Java project named Combinatorics should appear in your Package Explorer window on the left
We will need some external libraries in order to build against Hadoop's API. Download these libraries:
I have put these libraries in a zipped file here as well.
After you have collected all the libraries:
- Right click on the project > New > Folder
- Name the folder lib and press Finish
- Copy all the jar files into the newly created folder (you can do this in Nautilus as well as in Eclipse)
- Right click on the lib folder and click Refresh
- The jars you added should now appear there
- Go to Project > Properties > Java Build Path > Add JARs > Combinatorics > lib and select all the jar files
- Go to Project and check Build Automatically
Next, we need to create a source file in the src folder. Right click on the src folder > New > Class. Name it WordCount and press Finish.
Replace the contents of the newly created class with the following code. Note that FileInputFormat, TextInputFormat, FileOutputFormat and TextOutputFormat live in the org.apache.hadoop.mapreduce.lib.input and lib.output packages, so they need their own imports; the mapreduce.* wildcard does not cover them.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    // Mapper: for every line of input, emit a (word, 1) pair per token.
    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sum up all the counts collected for each word.
    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        // args[0] is the input directory, args[1] the output directory.
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
    }
}
This is the simplest form of a MapReduce program. We will take an in-depth look at the code later; first, we need to run it.
- Go to Run > Run to execute the program
- The program will at first fail with an ArrayIndexOutOfBoundsException, because it has not been given any arguments yet
- Go to Run > Run Configurations > Arguments and add the following arguments:
/home/hadoop/Documents/books/ /home/hadoop/Documents/books/output (assuming that you followed part 1 and the text files are still in this path). Note that Hadoop refuses to run if the output directory already exists, so delete it before re-running the job.
- Before you press Run, make sure all the Hadoop services are running. You have to start them yourself (a quick way to verify them is shown after this list). Here is the command:
$ start-all.sh
- Now press Run
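To confirm that the daemons actually came up before you run the job, jps lists the running Java processes. On a single-node Hadoop 1.x setup like the one from part 1 the output should include entries such as NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker (the exact list is an assumption based on that standard setup):
$ jps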
Watch the same progress log in Eclipse's Console window that you previously saw in the terminal. Your output should be in the /home/hadoop/Documents/books/output directory.
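To inspect the result, print the reducer's output file; part-r-00000 is the default name Hadoop gives the first reduce output file:
$ cat /home/hadoop/Documents/books/output/part-r-00000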
Next, we will try to understand the code and maybe change it to try something else.
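As a first small experiment, you can add a combiner. A combiner runs the reduce logic on each mapper's local output before it is sent over the network, cutting down the amount of intermediate data; since our Reduce class has matching input and output types (Text, IntWritable), it can double as the combiner. This is a sketch of the one-line addition to main, placed between the existing mapper and reducer lines (not part of the original listing):
job.setMapperClass(Map.class);
job.setCombinerClass(Reduce.class);  // new: pre-aggregate (word, 1) pairs on the map side
job.setReducerClass(Reduce.class);
The final word counts are identical with or without the combiner; the difference only shows up in the amount of data shuffled between the map and reduce phases.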
Please feel free to comment with corrections, critiques, requests for help, etc.