Can I run Hadoop on the fly (run MapReduce while the application is running)?

2013-10-25T14:53:56

Can we generate output using Hadoop on the fly? I have a big file consisting of logs that contain appointment IDs. If I use a traditional RDBMS I can get the appointment IDs, but it takes 1 or 2 hours.

The log file size is 800 GB.

By "on the fly" I mean showing these appointment IDs when the admin logs into the system. Can I run Hadoop on the fly (run MapReduce while the application is running)?
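One way to approach this is to submit the MapReduce job programmatically from the running application via Hadoop's `Job` API rather than from the command line. Below is a minimal sketch of a map-only job that extracts appointment IDs; the class names and the assumed log format (comma-separated fields, appointment ID in the third field) are illustrative assumptions, not taken from the question.

```java
// Hypothetical sketch: a map-only Hadoop job that an application can submit
// at runtime to extract appointment IDs from log files on HDFS.
// Assumptions: comma-separated log lines, appointment ID in the third field.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class AppointmentIdExtractor {

    // Emits the appointment ID found in each log line, with no value payload.
    public static class ExtractMapper
            extends Mapper<Object, Text, Text, NullWritable> {
        @Override
        protected void map(Object key, Text line, Context ctx)
                throws java.io.IOException, InterruptedException {
            String[] fields = line.toString().split(",");
            if (fields.length > 2) {
                ctx.write(new Text(fields[2].trim()), NullWritable.get());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "appointment-id-extract");
        job.setJarByClass(AppointmentIdExtractor.class);
        job.setMapperClass(ExtractMapper.class);
        job.setNumReduceTasks(0);           // map-only: just extract the IDs
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // waitForCompletion blocks until the job finishes; from a web
        // application you could call job.submit() instead and poll
        // job.isComplete() so the request thread is not tied up.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Note that even on a cluster, a full scan of an 800 GB file takes minutes, not milliseconds, so triggering this job on each admin login would be slow; a common design is to run the job periodically (or incrementally on new logs) and have the login page read the precomputed output from HDFS or a database.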

Copyright License:
Author: user2826111. Reproduced under the CC 4.0 BY-SA copyright license with link to original source & disclaimer.
Link to: https://stackoverflow.com/questions/19583017/can-i-run-hadoop-onflow-run-map-reduce-when-application-is-running

