
Is it best practice to move Hadoop logs to HDFS when they rotate to allow them to be visible through Hunk?

alexmc
Explorer

Hadoop generates lots of logs. It struck me recently that when the logs rotate, I could just move them into HDFS and make them visible through Hunk.
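Roughly what I have in mind is a small cron job along these lines (just a sketch, untested; the local log directory, the HDFS destination, and the per-host layout are my own assumptions, not anything Hunk requires):

#!/usr/bin/env python
# Sketch: push already-rotated Hadoop daemon logs into HDFS so a Hunk virtual
# index pointed at the destination directory can search them. Relies on the
# hdfs CLI being on the PATH.
import glob
import os
import subprocess

LOCAL_LOG_DIR = "/var/log/hadoop"              # wherever the daemons write their logs
HDFS_DEST = "/logs/hadoop/" + os.uname()[1]    # one HDFS subdirectory per host

def archive_rotated_logs():
    subprocess.check_call(["hdfs", "dfs", "-mkdir", "-p", HDFS_DEST])
    # Match only files that have already been rotated (they carry a suffix),
    # never the live *.log files the daemons are still appending to.
    for path in glob.glob(os.path.join(LOCAL_LOG_DIR, "*.log.*")):
        # -moveFromLocal removes the local copy once the upload succeeds.
        subprocess.check_call(["hdfs", "dfs", "-moveFromLocal", path, HDFS_DEST])

if __name__ == "__main__":
    archive_rotated_logs()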

Is this what many people do?

I guess I should change the log4j config so that rotated log files all have the date in their name rather than just a ".<digit>" suffix.
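If I remember right, Hadoop's stock log4j.properties already defines a DailyRollingFileAppender (DRFA) alongside the index-suffixed RollingFileAppender (RFA), so it may just be a matter of pointing the root logger at DRFA. The exact property names probably vary by Hadoop version, so treat this as a sketch:

# Sketch only - adapted from the DRFA definition in Hadoop's shipped log4j.properties
hadoop.root.logger=INFO,DRFA

log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
# Rolled files get a date suffix, e.g. hadoop-hdfs-namenode-host1.log.2014-05-14
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n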

This could be a problem if the Hadoop cluster itself goes down, but I hope that if that happens the current (unrotated) log files will be enough to diagnose it.

Is this "best practice"?


rdagan_splunk
Splunk Employee

So far I am aware of only one other customer who is using Hunk to monitor Hadoop.

Customer use case = http://www.slideshare.net/Hadoop_Summit/enabling-exploratory-analytics-of-data-in-sharedservice-hado...
In addition, here is a good blog post on the subject: http://blogs.splunk.com/2014/05/14/hunkonhunk/
