Question

I'm new to Hadoop.

I have installed my HBase setup using Cloudera (normal installation) on 5 servers. I created some tables and loaded some data.

I have been monitoring the HDFS folder sizes. The DataNode disk usage stays consistent, but the NameNode and JournalNode disk usage increases every time I check.

Even though I have only a small amount of data, the JournalNode and NameNode directories are growing by roughly 30 MB per day.

Am I missing something in the configuration?


Solution

After some research I found out why the edit logs are not being cleared: it is controlled by a Hadoop setting.

dfs.namenode.num.extra.edits.retained = 1000000

This is the default value; it tells the NameNode to retain that many extra edit-log transactions beyond what is minimally necessary for a restart, which is why the NameNode and JournalNode directories keep growing even with little data. (Reference)
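If you want the NameNode and JournalNode to keep fewer old edits, this value can be lowered in hdfs-site.xml. The property name and its default come from the stock HDFS configuration; the smaller value of 10000 below is only an illustrative choice, not a recommendation, and the NameNode needs to be restarted (or the configuration redeployed through Cloudera Manager) for the change to take effect.

    <property>
      <name>dfs.namenode.num.extra.edits.retained</name>
      <!-- Default is 1000000. A smaller value (10000 here is just an
           example) means old edit-log transactions are purged sooner,
           so the NameNode and JournalNode directories use less disk. -->
      <value>10000</value>
    </property>

Note that old edits are only purged when the NameNode completes a checkpoint, so the directories shrink after the next checkpoint rather than immediately. There is also a related setting, dfs.namenode.max.extra.edits.segments.retained (default 10000), which caps the number of extra edit-log segment files kept regardless of transaction count.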
