Getting Data In

If I delete data from HDFS will it impact my Splunk instance?

scottgr
New Member

I'm storing log data in HDFS that is being indexed by Splunk. Due to space constraints I'd like to delete data over a certain age. I know that I can do this by editing indexes.conf, but I wanted to see if there were any gotchas that I needed to be aware of.

I'm specifically interested in knowing:

  • Will Splunk correctly delete the log data from HDFS if I tell it to delete data over a certain age? i.e. is there anything specific I need to know about deletion of Splunk data from HDFS?
  • If, instead of deleting the data through Splunk, I used a script to automatically delete the files from HDFS, would it cause problems with Splunk? (For example, if the index expects to see data that is now missing.) There might be some advantages to deleting the data from HDFS directly rather than depending on Splunk to do it.

I'm quite new to working with Splunk as a developer so I'd be grateful for any advice people have with the above. Thanks.
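For context, the indexes.conf setting I'm referring to is the usual age-based retention control. A minimal sketch (the index name and the 30-day value are just examples for illustration):

```ini
# indexes.conf - hypothetical example stanza
# frozenTimePeriodInSecs ages out buckets older than the given number
# of seconds (here, 30 days = 2592000 seconds).
[my_log_index]
frozenTimePeriodInSecs = 2592000
```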

0 Karma

mattymo
Splunk Employee

Hey Scottgr!

As far as I know, Hunk/Data Roll will not apply any retention policy on the HDFS side, so removing data with a script or policy on the Hadoop side shouldn't bother Hunk. The virtual index simply tells Hunk where the data lives and how to send a MapReduce job to the Hadoop side, and it will return whatever it finds.

You should be fine to manage the lifecycle of the data in HDFS in whatever manner works for you. In fact, it's something you will be required to do.

- MattyMo
0 Karma

scottgr
New Member

That's great - really appreciate the advice. Thanks.

0 Karma

ddrillic
Ultra Champion

Right, that's the key thing - it's virtual - "just" a pointer to the location. You can administer the data on HDFS as you please...

0 Karma

mattymo
Splunk Employee

Hi scottgr!

Can you tell us more about the Splunk configuration you are using to interact with HDFS? Hadoop Data Roll? Hunk?

When I last played with Data Roll, Splunk didn't maintain the retention logic on the HDFS side. Once data landed there, it stayed there; retention only aged out data in your native Splunk indexes. I recall it was relatively easy to use the hdfs command to clean up if needed.
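To illustrate what that cleanup could look like, here's a rough sketch. It assumes a hypothetical date-partitioned layout like /logs/YYYY-MM-DD/ (your layout may differ); the helper just selects which paths fall outside the retention window, and the actual removal would be done with something like `hdfs dfs -rm -r -skipTrash <path>`:

```python
# Hypothetical sketch: pick HDFS paths to prune by age, assuming log
# data lives under date-partitioned directories like /logs/2024-05-01/.
# Selected paths would then be deleted via `hdfs dfs -rm -r -skipTrash`.
from datetime import date, timedelta


def paths_older_than(paths, retention_days, today):
    """Return the date-partitioned paths older than the retention window."""
    cutoff = today - timedelta(days=retention_days)
    old = []
    for p in paths:
        # The last path component is assumed to be a YYYY-MM-DD date.
        day = date.fromisoformat(p.rstrip("/").rsplit("/", 1)[-1])
        if day < cutoff:
            old.append(p)
    return old


# Example: with a 30-day retention window as of 2024-05-25,
# only the March directory is selected for deletion.
sample = ["/logs/2024-03-01/", "/logs/2024-05-20/"]
print(paths_older_than(sample, 30, date(2024, 5, 25)))
```

In practice you'd feed this the directory listing from `hdfs dfs -ls /logs/` and run it from cron, but the selection logic is the only interesting part.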

- MattyMo
0 Karma

scottgr
New Member

Thanks for getting back to me - it's much appreciated. We're using Hunk. So we have a remote Hadoop cluster that we're accessing in Splunk via a virtual index.

What I'm unsure about is whether it's acceptable to just delete old data within Hadoop, or whether that will cause problems with Splunk's indexing.

Thanks again for the help.

0 Karma