I'm storing log data in HDFS that is being indexed by Splunk. Due to space constraints I'd like to delete data over a certain age. I know that I can do this by editing indexes.conf, but I wanted to check whether there are any gotchas I need to be aware of.
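For reference, the sort of indexes.conf change I have in mind for an ordinary (non-virtual) index is along these lines - the index name and values here are just placeholders:

[my_log_index]
# freeze (and, with no frozen archive configured, delete) buckets whose newest event is older than 90 days
frozenTimePeriodInSecs = 7776000
# overall size cap as a backstop
maxTotalDataSizeMB = 500000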
I'm specifically interested in knowing whether it's safe to simply delete old data on the Hadoop side, or whether doing so will cause problems with the Splunk indexing.
I'm quite new to working with Splunk as a developer, so I'd be grateful for any advice people have on the above. Thanks.
Hey Scottgr!
As far as I know Hunk/Data Roll will not apply any retention policy on the HDFS side, so removing data with a script or policy on the Hadoop side shouldn't bother Hunk. The virtual index simply tells Hunk where the data lives and how to send a MapReduce job to the Hadoop side, and it will return whatever it finds there.
You should be fine to manage the lifecycle of the data in HDFS in whatever manner works for you; in fact, it's something you'll be required to do yourself.
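If it helps, the cleanup on the Hadoop side can be as simple as a small script that shells out to hdfs dfs. Here's a rough sketch, assuming the logs land in date-named directories under one base path - the path, layout and retention period are purely illustrative and will need adjusting to your cluster:

#!/usr/bin/env python
# HDFS-side retention sketch: deletes date-named log directories older than the cutoff.
import subprocess
from datetime import datetime, timedelta

BASE_PATH = "/data/logs"      # hypothetical HDFS location of the raw logs
RETENTION_DAYS = 90           # delete anything older than this

cutoff = datetime.now() - timedelta(days=RETENTION_DAYS)

# List the subdirectories under the base path.
listing = subprocess.check_output(["hdfs", "dfs", "-ls", BASE_PATH]).decode()

for line in listing.splitlines():
    path = line.split()[-1]              # last column of -ls output is the full path
    dirname = path.rsplit("/", 1)[-1]    # e.g. "2015-06-01"
    try:
        day = datetime.strptime(dirname, "%Y-%m-%d")
    except ValueError:
        continue                         # skip anything that isn't a date directory
    if day < cutoff:
        # -skipTrash frees the space immediately instead of moving data to .Trash
        subprocess.check_call(["hdfs", "dfs", "-rm", "-r", "-skipTrash", path])
        print("deleted %s" % path)

Run something like that from cron on an edge node (or fold the same logic into whatever you already use for HDFS housekeeping) and the virtual index will simply stop returning the deleted days.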
That's great - really appreciate the advice. Thanks.
Right, that's the key thing - it's virtual - "just" a pointer to the location. You can administer the data on HDFS as you please...
Hi scottgr!
Can you tell us more about the Splunk configuration you are using to interact with HDFS? Hadoop Data Roll? Hunk?
When I last played with Data Roll, Splunk didn't maintain any retention logic on the HDFS side. Once the data was there, it was there; retention only aged out data in the Splunk indexes themselves. I recall it was relatively easy to use the hdfs command to clean up on the Hadoop side if needed.
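For example, something along the lines of the following removes an old partition outright, with -skipTrash releasing the space immediately (the path is just an illustration):

hdfs dfs -rm -r -skipTrash /data/logs/2015-01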
Thanks for getting back to me - it's much appreciated. We're using Hunk, so we have a remote Hadoop cluster that we're accessing from Splunk via a virtual index.
What I'm unsure about is whether it's acceptable to just delete old data within Hadoop - or whether that will cause problems with the Splunk indexing.
Thanks again for the help.