Hadoop mapreduce task failing with 143


I am currently learning to use Hadoop MapReduce and have come across this error:

packageJobJar: [/home/hduser/mapper.py, /home/hduser/reducer.py, /tmp/hadoop-unjar4635332780289131423/] [] /tmp/streamjob8641038855230304864.jar tmpDir=null
16/10/31 17:41:12 INFO client.RMProxy: Connecting to ResourceManager at /
16/10/31 17:41:13 INFO client.RMProxy: Connecting to ResourceManager at /
16/10/31 17:41:15 INFO mapred.FileInputFormat: Total input paths to process : 1
16/10/31 17:41:17 INFO mapreduce.JobSubmitter: number of splits:2
16/10/31 17:41:18 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1477933345919_0004
16/10/31 17:41:19 INFO impl.YarnClientImpl: Submitted application application_1477933345919_0004
16/10/31 17:41:19 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1477933345919_0004/
16/10/31 17:41:19 INFO mapreduce.Job: Running job: job_1477933345919_0004
16/10/31 17:41:38 INFO mapreduce.Job: Job job_1477933345919_0004 running in uber mode : false
16/10/31 17:41:38 INFO mapreduce.Job:  map 0% reduce 0%
16/10/31 17:41:56 INFO mapreduce.Job:  map 100% reduce 0%
16/10/31 17:42:19 INFO mapreduce.Job: Task Id : attempt_1477933345919_0004_r_000000_0, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 1
    at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
    at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
    at org.apache.hadoop.streaming.PipeReducer.close(PipeReducer.java:134)
    at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

I am unable to work out how to fix this error and have been searching the internet. The code I am using for my mapper is:

import sys

for line in sys.stdin:
    line = line.strip()
    words = line.split()

    for word in words:
        print '%s\t%s' % (word, 1)
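The mapper's tokenize-and-emit step can be checked locally without Hadoop. Here is a minimal Python 3 sketch of the same logic (the helper name `map_line` is mine, not from the post):

```python
def map_line(line):
    # Split the line on whitespace and emit each token with a count of 1,
    # mirroring what the streaming mapper prints as tab-separated pairs.
    return [(word, 1) for word in line.strip().split()]

print(map_line("the quick the"))
```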

The code for the reducer is:

from operator import itemgetter
import sys

current_word = None
current_count = 0
word = None

for line in sys.stdin:
    line = line.strip()
    word, count = line.split('\t', 1)

    try:
        count = int(count)
    except ValueError:
        continue

    if current_word == word:
        current_count += count
    else:
        if current_word:
            print '%s\t%s' % (current_word, current_count)
        current_count = count
        current_word = word

if current_word == word:
    print '%s\t%s' % (current_word, current_count)
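The reducer above relies on Hadoop's shuffle delivering pairs sorted by key, so equal words arrive adjacent and can be collapsed with a single running counter. A Python 3 rendition of that pattern, handy for testing the logic in isolation (the function name `reduce_sorted` is mine, not from the post):

```python
def reduce_sorted(pairs):
    """Collapse (word, count) pairs that arrive sorted by word,
    mirroring what a streaming reducer sees after the shuffle."""
    results = []
    current_word, current_count = None, 0
    for word, count in pairs:
        if current_word == word:
            current_count += count
        else:
            # Key changed: flush the finished word before starting the new one.
            if current_word is not None:
                results.append((current_word, current_count))
            current_word, current_count = word, count
    # Flush the final word, which the loop never gets to emit.
    if current_word is not None:
        results.append((current_word, current_count))
    return results
```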

To run the job I am using:

hduser@master:/opt/hadoop-2.7.3/share/hadoop/tools/lib $ hadoop jar hadoop-streaming-2.7.3.jar -file /home/hduser/mapper.py -mapper "python mapper.py" -file /home/hduser/reducer.py -reducer "python reducer.py" -input ~/testDocument -output ~/results1
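A streaming job can be dry-run locally with a plain shell pipeline that mimics map -> shuffle/sort -> reduce, which usually surfaces the script error behind "subprocess failed with code 1" before touching Hadoop. This is my suggestion, not from the post; the scripts below are reduced sketches written with Python 3-compatible `print()` calls (the post's scripts use Python 2 print statements), and the `/tmp` paths are arbitrary:

```shell
# Write minimal mapper/reducer sketches to temporary files.
cat > /tmp/m.py <<'EOF'
import sys
for line in sys.stdin:
    for word in line.strip().split():
        print('%s\t%s' % (word, 1))
EOF
cat > /tmp/r.py <<'EOF'
import sys
current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.strip().split('\t', 1)
    try:
        count = int(count)
    except ValueError:
        continue
    if current_word == word:
        current_count += count
    else:
        if current_word is not None:
            print('%s\t%s' % (current_word, current_count))
        current_word, current_count = word, count
if current_word is not None:
    print('%s\t%s' % (current_word, current_count))
EOF
# map -> sort (stand-in for the shuffle) -> reduce, all outside Hadoop.
printf 'the quick the\n' | python3 /tmp/m.py | sort -k1,1 | python3 /tmp/r.py
```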

Any help would be appreciated as I am new to Hadoop. If any more logs or information are required please don't hesitate to ask.

Copyright License:
Author: "hudsond7", reproduced under the CC BY-SA 4.0 license with link to original source & disclaimer.
Link: https://stackoverflow.com/questions/40347638/hadoop-mapreduce-task-failing-with-143
