All Apps and Add-ons

Is it best practice to move Hadoop logs to HDFS when they rotate to allow them to be visible through Hunk?

alexmc
Explorer

Hadoop generates lots of logs. It struck me recently that when the logs rotate, I might just move them to HDFS and allow them to be visible through Hunk.

Is this what many people do?

I guess I should change the log4j config so that rotated log files all have the date in their names rather than just a ".<digit>" suffix.
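For reference, Hadoop's stock log4j.properties already ships a DailyRollingFileAppender that produces date-suffixed rotated files; a minimal sketch of the relevant settings (appender name and property paths follow the Hadoop defaults, but check your own log4j.properties):

```properties
# DailyRollingFileAppender renames the rotated file using DatePattern,
# e.g. hadoop-namenode.log.2014-06-01, instead of a numeric ".1" suffix.
log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hadoop.log.dir}/${hadoop.log.file}
log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```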

This could be a problem if the Hadoop cluster goes down - but I hope that, if that happens, the current (not yet rotated) log files will be enough.
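The "move on rotation" step could look something like the sketch below. This is only an illustration, not an established practice: it assumes rotated files carry a date suffix (as suggested above), that the `hdfs` CLI is on the PATH, and the `LOG_DIR`/`HDFS_DIR` paths are invented for the example.

```shell
#!/bin/sh
# Sketch only: assumes rotated files are date-suffixed (e.g. *.log.2014-06-01)
# and the `hdfs` CLI is installed. LOG_DIR and HDFS_DIR are illustrative.
LOG_DIR=${LOG_DIR:-/var/log/hadoop}
HDFS_DIR=${HDFS_DIR:-/logs/hadoop/$(hostname)}

if command -v hdfs >/dev/null 2>&1; then
    hdfs dfs -mkdir -p "$HDFS_DIR"
    for f in "$LOG_DIR"/*.log.*; do
        [ -e "$f" ] || continue          # glob matched nothing
        # -moveFromLocal deletes the local copy once the upload succeeds
        hdfs dfs -moveFromLocal "$f" "$HDFS_DIR/"
    done
else
    echo "hdfs CLI not found; nothing moved"
fi
```

Something like this could run from cron or a post-rotate hook. If HDFS is unavailable, the rotated files simply stay on local disk until the next run, which fits the fallback described above.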

Is this "best practice"?


rdagan_splunk
Splunk Employee

So far I am aware of only one other customer who is using Hunk to monitor Hadoop.

Customer use case: http://www.slideshare.net/Hadoop_Summit/enabling-exploratory-analytics-of-data-in-sharedservice-hado...
In addition, here is a good blog post on the subject: http://blogs.splunk.com/2014/05/14/hunkonhunk/
