Is it best practice to move Hadoop logs to HDFS when they rotate to allow them to be visible through Hunk?


Hadoop generates lots of logs. It struck me recently that when the logs rotate, I could simply move the rotated files into HDFS and make them visible through Hunk.
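The move-on-rotate idea could be scripted roughly like this (a minimal sketch: the log directory, HDFS target path, and the overridable upload command are my assumptions, not anything Hadoop or Hunk prescribes):

```shell
#!/bin/sh
# Sketch: ship rotated Hadoop daemon logs into HDFS so Hunk can search them.
# Usage: ship_rotated_logs LOG_DIR HDFS_TARGET [UPLOAD_CMD]
# UPLOAD_CMD defaults to "hdfs dfs -put" but can be overridden for a dry run.
ship_rotated_logs() {
    dir=$1
    target=$2
    upload=${3:-"hdfs dfs -put"}
    # Match rotated files only (e.g. namenode.log.1 or namenode.log.2014-01-01),
    # never the live *.log files the daemons are still writing to.
    for f in "$dir"/*.log.*; do
        [ -e "$f" ] || continue          # glob matched nothing
        # Delete the local copy only if the upload reported success.
        $upload "$f" "$target/" && rm -f -- "$f"
    done
}
```

Run from cron shortly after rotation; `hdfs dfs -put` without `-f` refuses to overwrite an existing file, which also guards against accidental double uploads once the filenames carry dates.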

Is this what many people do?

I guess I should also change the log4j config so that rotated log files carry the date in their names rather than just a ".<digit>" suffix.
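For what it's worth, log4j 1.x (which Hadoop uses) has a daily-rolling appender that stamps each rotated file with a date instead of a numeric suffix; a config along these lines should do it (the `DRFA` appender name and the `hadoop.log.dir`/`hadoop.log.file` properties follow Hadoop's own log4j.properties conventions, but check your distribution's file before copying):

```properties
# DailyRollingFileAppender: rotated files get a ".yyyy-MM-dd" suffix
# instead of ".1", ".2", ... (log4j 1.x syntax)
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

You would then point the daemons' root logger at `DRFA` instead of the size-based rolling appender.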

One downside: if the Hadoop cluster goes down, the archived logs become unreachable - but I hope that in that case the current (not-yet-rotated) log files would be enough to troubleshoot with.

Is this "best practice"?


Splunk Employee

So far I am aware of only one other customer who is using Hunk to monitor Hadoop.

Customer use case =
In addition, here is a good blog on the subject:
