java.lang.ArrayIndexOutOfBoundsException in Hadoop MapReduce program

2016-08-27T03:02:25

I am getting an ArrayIndexOutOfBoundsException in my map program. Below are the data and the MapReduce program.

Data:

1,raja,10,10000

2,jyo,10,10000

3,tej,11,20000

4,tej1,11,20000

MapReduce program:

    public static class EmployMap extends Mapper<LongWritable, Text, Text, IntWritable> {
        String dNname;

        public void map(LongWritable k, Text v, Context con) throws IOException, InterruptedException {
            String text = v.toString();
            String[] textArry = text.split(",");
            System.out.println(textArry.length);
            int dNo = Integer.parseInt(textArry[2]);
            int sal = Integer.parseInt(textArry[3]);
            if (dNo == 10) {
                dNname = "Automation";
            } else {
                dNname = "Manual";
            }
            con.write(new Text(dNname), new IntWritable(sal));
        }
    }

    public static class EmployReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        int totalSal;

        public void reduce(Text k, Iterable<IntWritable> v, Context con) throws IOException, InterruptedException {
            for (IntWritable val : v) {
                totalSal += val.get();
            }
            con.write(k, new IntWritable(totalSal));
        }
    }

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration conf = new Configuration();
        Path input = new Path(args[0]);
        Path output = new Path(args[1]);
        Job job = Job.getInstance(conf);
        job.setJarByClass(Employ.class);
        job.setMapperClass(EmployMap.class);
        job.setReducerClass(EmployReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, input);
        FileOutputFormat.setOutputPath(job, output);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
    }

Error logs:

Error: java.lang.ArrayIndexOutOfBoundsException: 2
    at Employ$EmployMap.map(Employ.java:21)
    at Employ$EmployMap.map(Employ.java:1)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

The error points at line 21, i.e. at "int dNo=Integer.parseInt(textArry[2]);". Can someone help me understand what is wrong with the code?
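A likely cause, which the logs alone do not confirm, is that some input line splits into fewer than three comma-separated fields (for example an empty line or a trailing blank line at the end of the file), so textArry[2] does not exist. A minimal sketch of a defensive map(), meant to drop into the existing EmployMap class and assuming the same four-field layout as the sample data, would skip such lines instead of throwing:

    public void map(LongWritable k, Text v, Context con)
            throws IOException, InterruptedException {
        String text = v.toString().trim();
        // Guard against blank or malformed lines: split(",") on an empty
        // string returns a one-element array, so textArry[2] would throw.
        String[] textArry = text.split(",");
        if (textArry.length < 4) {
            return; // skip records that do not have all four fields
        }
        int dNo = Integer.parseInt(textArry[2]);
        int sal = Integer.parseInt(textArry[3]);
        String dName = (dNo == 10) ? "Automation" : "Manual";
        con.write(new Text(dName), new IntWritable(sal));
    }

Printing the offending line (or incrementing a counter via con.getCounter(...)) before skipping would help confirm which records are actually malformed.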

Copyright license:
Author: Rajashekar. Reproduced under the CC 4.0 BY-SA license with a link to the original source and disclaimer.
Link: https://stackoverflow.com/questions/39173122/java-lang-arrayindexoutofboundsexception-in-hadoop-mapreduce-programme
