All Apps and Add-ons

Is it best practice to move Hadoop logs to HDFS when they rotate to allow them to be visible through Hunk?

alexmc
Explorer

Hadoop generates lots of logs. It struck me recently that when the logs rotate, I could move the rotated files into HDFS so they become searchable through Hunk.

Is this what many people do?

I guess I should change the log4j config so that rotated log files all have the date in their name rather than just a ".<digit>" suffix.
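For the date-in-the-name rotation, something like the standard `DailyRollingFileAppender` stanza from Hadoop's `log4j.properties` would do it (appender name and conversion pattern shown here are illustrative): the `DatePattern` makes log4j append the date to the rotated file instead of a numeric suffix.

```properties
# Illustrative log4j.properties fragment: rotate daily and suffix the
# rotated file with the date, e.g. hadoop-namenode.log.2014-06-01
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```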

This could be a problem if the Hadoop cluster itself goes down, but I hope that in that case the current (not yet rotated) log files would be enough to diagnose the failure.
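The move-on-rotate step could be scripted roughly like this. This is only a sketch: the paths, the `hdfs_dest` helper, and the date-partitioned layout are assumptions, and it assumes the `hdfs` CLI is on the PATH and that rotated files already carry a date suffix as described above. Partitioning by date directory lets a Hunk virtual index prune by path.

```shell
#!/bin/sh
# Hypothetical cron job: ship rotated Hadoop logs into HDFS for Hunk.
LOG_DIR=/var/log/hadoop
HDFS_BASE=/logs/hadoop

# Build the HDFS destination from a rotated file name, partitioning by the
# trailing date suffix, e.g. .../2014-06-01/hadoop-namenode.log.2014-06-01
hdfs_dest() {
    file=$(basename "$1")
    date_part=${file##*.}       # trailing date suffix, e.g. 2014-06-01
    echo "$HDFS_BASE/$date_part/$file"
}

for f in "$LOG_DIR"/*.log.[0-9]*-[0-9]*-[0-9]*; do
    [ -e "$f" ] || continue     # skip if the glob matched nothing
    dest=$(hdfs_dest "$f")
    hdfs dfs -mkdir -p "$(dirname "$dest")" &&
    hdfs dfs -put -f "$f" "$dest" &&
    rm "$f"                     # delete locally only after a successful upload
done
```

Deleting the local copy only after both HDFS commands succeed means a failed upload leaves the file in place for the next run.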

Is this "best practice"?


rdagan_splunk
Splunk Employee

So far I am aware of only one other customer who is using Hunk to monitor Hadoop.

Customer use case = http://www.slideshare.net/Hadoop_Summit/enabling-exploratory-analytics-of-data-in-sharedservice-hado...
In addition, here is a good blog on the subject: http://blogs.splunk.com/2014/05/14/hunkonhunk/
