Question

I am trying to integrate Flume with HDFS, and my Flume config file is:

hdfs-agent.sources= netcat-collect
hdfs-agent.sinks = hdfs-write
hdfs-agent.channels= memoryChannel

hdfs-agent.sources.netcat-collect.type = netcat
hdfs-agent.sources.netcat-collect.bind = localhost
hdfs-agent.sources.netcat-collect.port = 11111

hdfs-agent.sinks.hdfs-write.type = FILE_ROLL
hdfs-agent.sinks.hdfs-write.hdfs.path = hdfs://127.0.0.1:50020/user/oracle/flume
hdfs-agent.sinks.hdfs-write.rollInterval = 30
hdfs-agent.sinks.hdfs-write.hdfs.writeFormat=Text
hdfs-agent.sinks.hdfs-write.hdfs.fileType=DataStream

hdfs-agent.channels.memoryChannel.type = memory
hdfs-agent.channels.memoryChannel.capacity=10000
hdfs-agent.sources.netcat-collect.channels=memoryChannel
hdfs-agent.sinks.hdfs-write.channel=memoryChannel.

And my core-site.xml file is:

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost</value>
    </property>
</configuration>

When I try to run the Flume agent, it starts and reads from the nc command, but while writing to HDFS I get the exception below. I have also tried leaving safe mode with hadoop dfsadmin -safemode leave, but I still get the same exception.

2014-02-14 10:31:53,785 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.hdfs.BucketWriter.open(BucketWriter.java:219)] Creating hdfs://127.0.0.1:50020/user/oracle/flume/FlumeData.1392354113707.tmp
2014-02-14 10:31:54,011 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:418)] HDFS IO error
java.io.IOException: Call to /127.0.0.1:50020 failed on local exception: java.io.EOFException
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1089)
        at org.apache.hadoop.ipc.Client.call(Client.java:1057)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
        at $Proxy5.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:369)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:111)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:213)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:180)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1489)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1523)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1505)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:227)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:226)
        at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:220)
        at org.apache.flume.sink.hdfs.BucketWriter$8$1.run(BucketWriter.java:536)
        at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:160)
        at org.apache.flume.sink.hdfs.BucketWriter.access$1000(BucketWriter.java:56)
        at org.apache.flume.sink.hdfs.BucketWriter$8.call(BucketWriter.java:533)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:375)
        at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:781)
        at org.apache.hadoop.ipc.Client$Connection.run(Client.java:689)
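
For reference, I start the agent and test it roughly like this (substitute your own config file path):

# Start the agent; --name must match the property prefix used in the config
flume-ng agent --conf conf --conf-file hdfs-agent.conf --name hdfs-agent -Dflume.root.logger=INFO,console

# In a second terminal, type lines into the netcat source
nc localhost 11111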

Please let me know if I have configured something wrong in any of the properties files.

Also, please let me know whether I am using the correct port for this.

My goal is to integrate Flume and Hadoop; I have a single-node Hadoop setup.

Solution

You must provide a port number with fs.default.name. Note also that your sink points at port 50020, which is the DataNode IPC port rather than the NameNode RPC port, so the RPC call fails with an EOFException.

Example :

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9001</value>
    </property>
</configuration>
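
If you are unsure which port your NameNode actually listens on (9001 above is simply the value chosen for this example; 8020 and 9000 are other common defaults), you can verify it from the shell before pointing Flume at it:

# List the HDFS root through the explicit URI; this only succeeds if
# the NameNode RPC endpoint is reachable on that port
hadoop fs -ls hdfs://localhost:9001/

# Cluster summary from the NameNode, which also confirms connectivity
hadoop dfsadmin -report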

After that, edit the Flume config file as below:

hdfs-agent.sources= netcat-collect
hdfs-agent.sinks = hdfs-write
hdfs-agent.channels= memoryChannel

hdfs-agent.sources.netcat-collect.type = netcat
hdfs-agent.sources.netcat-collect.bind = localhost
hdfs-agent.sources.netcat-collect.port = 11111

hdfs-agent.sinks.hdfs-write.type = hdfs
hdfs-agent.sinks.hdfs-write.hdfs.path = hdfs://127.0.0.1:9001/user/oracle/flume
hdfs-agent.sinks.hdfs-write.rollInterval = 30
hdfs-agent.sinks.hdfs-write.hdfs.writeFormat=Text
hdfs-agent.sinks.hdfs-write.hdfs.fileType=DataStream

hdfs-agent.channels.memoryChannel.type = memory
hdfs-agent.channels.memoryChannel.capacity=10000
hdfs-agent.sources.netcat-collect.channels=memoryChannel
hdfs-agent.sinks.hdfs-write.channel=memoryChannel

Changes:

hdfs-agent.sinks.hdfs-write.type = hdfs (sink type changed from FILE_ROLL to hdfs)
hdfs-agent.sinks.hdfs-write.hdfs.path = hdfs://127.0.0.1:9001/user/oracle/flume (port number now matches fs.default.name)
hdfs-agent.sinks.hdfs-write.channel=memoryChannel (removed the trailing dot after memoryChannel)
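
After saving the corrected config (the file name and conf directory below are placeholders; adjust them to your layout), restart the agent and send a test event. Once the 30-second rollInterval elapses, a FlumeData file should appear under /user/oracle/flume:

# Start the agent; the value of --name must match the property prefix (hdfs-agent)
flume-ng agent --conf conf --conf-file hdfs-agent.conf --name hdfs-agent

# From another terminal, send a test event to the netcat source
echo "test event" | nc localhost 11111

# After the roll interval, verify the output in HDFS
hadoop fs -ls /user/oracle/flume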