suppress command line output hadoop fs command

2017-06-13T00:10:25

I am running a Hadoop MapReduce job using a Python program that creates different input paths as parameters for the MapReduce job. Before passing these input paths to the job, I check that each path exists in HDFS using the command:

hadoop fs -test -e 'filename'

My Python program then inspects the command's exit status to determine whether the file exists (`-test -e` exits with 0 when the file exists and a non-zero status otherwise). Since the Python program already checks for path existence and writes all of the nonexistent paths to a separate .txt file, I do not need the command-line warnings telling me which paths are missing.

I would like to know how to suppress (or ignore) the automatic hadoop fs output:

test: 'fileName': No such file or directory

as I am inputting a huge number of paths and quite a few of them do not exist in hadoop fs.
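That warning goes to the command's stderr stream, so one common approach is to discard stderr while still reading the exit code. A minimal sketch, assuming the check runs through Python's `subprocess` module (the `runner` parameter and the `hdfs_path_exists` name are illustrative, not part of the original program):

```python
import subprocess

def hdfs_path_exists(path, runner=subprocess.run):
    """Check whether `path` exists in HDFS.

    stderr is sent to DEVNULL so the "test: `path`: No such file or
    directory" warning never reaches the terminal; the exit code alone
    tells us whether the path exists. `runner` is injectable purely so
    the helper can be exercised without a Hadoop installation.
    """
    result = runner(
        ["hadoop", "fs", "-test", "-e", path],
        stderr=subprocess.DEVNULL,  # suppress the hadoop fs warning
    )
    return result.returncode == 0  # -test -e exits 0 iff the path exists
```

From a plain shell, the equivalent is redirecting stderr when invoking the command, e.g. `hadoop fs -test -e 'filename' 2>/dev/null`, then checking `$?`.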

Copyright License:
Author: matt123788, reproduced under the CC 4.0 BY-SA copyright license with link to original source & disclaimer.
Link to:https://stackoverflow.com/questions/44504037/suppress-command-line-output-hadoop-fs-command

