Hadoop job fails on a large dataset with "Child Error"

2015-08-14T03:59:02

I am running a Map-Reduce job in an application that runs on top of Hadoop. It works for smaller datasets, but increasing the data size causes it to fail with the message below.

I tried various memory configurations in mapred.child.*.java.opts, but without success. The process runs until about 6% or 7% complete and then fails; if the data size is reduced, it gets further before failing. I can also see that this particular job is assigned to only one mapper.

java.lang.Throwable: Child Error
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:250)
Caused by: java.io.IOException: Task process exit with nonzero status of 137.
    at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:237)
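For context: exit status 137 means the task JVM was terminated by SIGKILL (137 = 128 + 9), which typically indicates the operating system's OOM killer or the TaskTracker forcibly killed the child process, rather than a Java-level OutOfMemoryError. One common adjustment is to set the map and reduce child heaps separately so they fit within the node's physical memory. The values below are purely illustrative assumptions, not a known-good configuration for this cluster:

```xml
<!-- mapred-site.xml (classic MR1 property names) -->
<!-- Illustrative heap sizes only; they must leave headroom for the
     OS and other daemons on the node, or the kernel may SIGKILL
     the task process, producing exit status 137. -->
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx1536m</value>
</property>
<property>
  <name>mapred.reduce.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
```

Note that raising -Xmx can make the problem worse if the node is already short on physical memory, since a larger heap makes the process a more likely OOM-killer target.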

Copyright License:
Author: Rohan Mukherjee. Reproduced under the CC 4.0 BY-SA copyright license with link to original source & disclaimer.
Link: https://stackoverflow.com/questions/31997331/hadoop-job-not-running-in-big-dataset-throwing-child-error

