Data ingestion with Flume & Hadoop doesn't work

2013-11-15T17:13:37

I'm using Flume 1.4.0 and Hadoop 2.2.0. When I start Flume and write to HDFS, I get the following exception:

(SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:460)] process failed
java.lang.VerifyError: class org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$RenewLeaseRequestProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:791)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
        at java.lang.Class.getDeclaredMethods0(Native Method)
        at java.lang.Class.privateGetDeclaredMethods(Class.java:2442)
        at java.lang.Class.privateGetPublicMethods(Class.java:2562)
        at java.lang.Class.privateGetPublicMethods(Class.java:2572)
        at java.lang.Class.getMethods(Class.java:1427)
        at sun.misc.ProxyGenerator.generateClassFile(ProxyGenerator.java:426)
        at sun.misc.ProxyGenerator.generateProxyClass(ProxyGenerator.java:323)
        at java.lang.reflect.Proxy.getProxyClass(Proxy.java:521)
        at java.lang.reflect.Proxy.newProxyInstance(Proxy.java:601)
        at org.apache.hadoop.ipc.ProtobufRpcEngine.getProxy(ProtobufRpcEngine.java:92)
        at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:537)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:328)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:235)
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:139)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:510)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
        at org.apache.flume.sink.hdfs.BucketWriter.doOpen(BucketWriter.java:207)
        at org.apache.flume.sink.hdfs.BucketWriter.access$000(BucketWriter.java:53)
        at org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:172)
        at org.apache.flume.sink.hdfs.BucketWriter$1.run(BucketWriter.java:170)
        at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:143)
        at org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:170)
        at org.apache.flume.sink.hdfs.BucketWriter.append(BucketWriter.java:364)
        at org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:729)
        at org.apache.flume.sink.hdfs.HDFSEventSink$2.call(HDFSEventSink.java:727)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
        at java.util.concurrent.FutureTask.run(FutureTask.java:166)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)
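
This VerifyError is the classic symptom of a protobuf version clash: Hadoop 2.2.0 is built against protobuf-java 2.5.0, whose generated classes declare getUnknownFields() as final, while Flume 1.4.0 bundles the older protobuf-java 2.4.1 in its lib directory. If the 2.4.x jar is loaded first, Hadoop's 2.5-generated ClientNamenodeProtocolProtos class fails verification exactly as shown above. A minimal workaround sketch, assuming a default Flume tarball layout under $FLUME_HOME (the exact jar filenames are assumptions and may differ per install):

# See which conflicting libraries Flume bundles
ls $FLUME_HOME/lib | grep -E 'protobuf|guava'

# Move the pre-2.5 protobuf jar (and the old guava, which clashes with
# Hadoop 2.2.0 in the same way) aside so Hadoop's own versions are used
mkdir -p $FLUME_HOME/lib.disabled
mv $FLUME_HOME/lib/protobuf-java-2.4.1.jar $FLUME_HOME/lib.disabled/
mv $FLUME_HOME/lib/guava-10.0.1.jar $FLUME_HOME/lib.disabled/

After moving the jars, restart the Flume agent so the change takes effect.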

The hdfs-sink section of my flume.conf looks like this:

# Define a sink that outputs to HDFS
agent.sinks.hdfs-sink.channel = memory-channel
agent.sinks.hdfs-sink.type = hdfs
agent.sinks.hdfs-sink.hdfs.path = hdfs://localhost:8020/flume
agent.sinks.hdfs-sink.hdfs.fileType = DataStream
agent.sinks.hdfs-sink.hdfs.writeFormat = Text
agent.sinks.hdfs-sink.hdfs.rollCount = 10
agent.sinks.hdfs-sink.hdfs.batchSize = 10
agent.sinks.hdfs-sink.hdfs.rollSize = 0
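
For completeness, the sink section alone isn't a runnable agent; flume.conf also needs a source and the memory-channel the sink references. A minimal sketch of the surrounding configuration, assuming a netcat source on port 44444 (the source type, bind address, and port are placeholders, not from my actual config):

agent.sources = netcat-source
agent.channels = memory-channel
agent.sinks = hdfs-sink

# Placeholder source: any source wired to memory-channel will do
agent.sources.netcat-source.type = netcat
agent.sources.netcat-source.bind = localhost
agent.sources.netcat-source.port = 44444
agent.sources.netcat-source.channels = memory-channel

# The channel the hdfs-sink reads from
agent.channels.memory-channel.type = memory
agent.channels.memory-channel.capacity = 1000

With the sink settings above, hdfs.rollCount = 10 rolls a new file every 10 events, hdfs.rollSize = 0 disables size-based rolling, and hdfs.batchSize = 10 flushes to HDFS every 10 events. Note that the exception occurs while opening the HDFS connection, before any of these roll settings come into play.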

I hope someone can help me.

Author: user2991304. Reproduced under the CC BY-SA 4.0 license with a link to the original source: https://stackoverflow.com/questions/19997375/dataingestion-with-flume-hadoop-doesnt-work
