Getting Data In

If I delete data from HDFS will it impact my Splunk instance?

scottgr
New Member

I'm storing log data in HDFS that is being indexed by Splunk. Due to space constraints I'd like to delete data over a certain age. I know that I can do this by editing indexes.conf, but I wanted to see if there were any gotchas that I needed to be aware of.

I'm specifically interested in knowing:

  • Will Splunk correctly delete the log data from HDFS if I tell it to delete data over a certain age? i.e., is there anything specific I need to know about deleting Splunk data from HDFS?
  • If, instead of deleting the data through Splunk, I used a script to automatically delete the files from HDFS, would that cause problems with Splunk (for example, the index expecting to see data that is now missing)? There might be some advantages to deleting the data from HDFS directly rather than depending on Splunk to do it.
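For the indexes.conf route mentioned above, age-based retention on a native Splunk index is controlled by frozenTimePeriodInSecs (a sketch; the stanza name and 90-day value are placeholders):

```ini
# Hypothetical index stanza: buckets whose newest event is older than
# frozenTimePeriodInSecs are frozen, which by default means deleted.
[my_hdfs_logs]
frozenTimePeriodInSecs = 7776000   # 90 days
```

Note that this only governs Splunk's own index buckets, not the raw files sitting in HDFS.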

I'm quite new to working with Splunk as a developer so I'd be grateful for any advice people have with the above. Thanks.

0 Karma

mattymo
Splunk Employee

Hey Scottgr!

As far as I know, Hunk/Hadoop Data Roll will not enforce any retention policy on the HDFS side, so removing data with a script or policy on the Hadoop side shouldn't bother Hunk. The virtual index simply tells Hunk where the data lives and how to submit a MapReduce job to the Hadoop cluster, and it will return whatever it finds.
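To illustrate the "pointer" nature of a virtual index, here's a minimal sketch of what a Hunk virtual index stanza in indexes.conf looks like (the stanza name, provider, and path are placeholders):

```ini
# Hypothetical virtual index: it only points at a location in HDFS.
# If files under the path are deleted, searches simply return fewer
# results; nothing on the Splunk side breaks.
[my_virtual_index]
vix.provider = my_hadoop_provider
vix.input.1.path = /logs/...
```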

You should be fine to manage the lifecycle of the data in HDFS in whatever manner works for you, in fact, it is something you will be required to do.

- MattyMo
0 Karma

scottgr
New Member

That's great - really appreciate the advice. Thanks.

0 Karma

ddrillic
Ultra Champion

Right, that's the key thing - it's virtual - "just" a pointer to the location. You can administer the data on HDFS as you please...

0 Karma

mattymo
Splunk Employee

Hi scottgr!

Can you tell us more about the Splunk configuration you are using to interact with HDFS? Hadoop Data Roll? Hunk?

When I last played with Data Roll, Splunk didn't maintain any retention logic on the HDFS side. Once the data was copied there, it stayed there; Splunk only aged out data in its own indexes. I recall it was relatively easy to use the hdfs command to clean up when needed.
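For reference, the hdfs-command cleanup mentioned above can be scripted. This is a sketch only: it assumes a date-partitioned layout like /logs/YYYY-MM-DD, which is hypothetical; adapt the path parsing to however your logs are actually organized.

```shell
#!/bin/bash
# Sketch: delete date-partitioned HDFS log directories older than
# RETENTION_DAYS. The /logs/YYYY-MM-DD layout is a hypothetical example.
RETENTION_DAYS=90
CUTOFF=$(date -d "-${RETENTION_DAYS} days" +%Y-%m-%d)

hdfs dfs -ls /logs 2>/dev/null | awk '{print $NF}' | while read -r dir; do
  day=$(basename "$dir")
  # Lexicographic comparison is safe for zero-padded YYYY-MM-DD strings
  if [[ "$day" < "$CUTOFF" ]]; then
    hdfs dfs -rm -r -skipTrash "$dir"
  fi
done
```

Run it from a dry-run first (swap the -rm line for an echo) before letting it delete anything.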

- MattyMo
0 Karma

scottgr
New Member

Thanks for getting back to me - it's much appreciated. We're using Hunk. So we have a remote Hadoop cluster that we're accessing in Splunk via a virtual index.

What I'm unsure about is whether it's acceptable to just delete old data within Hadoop - or whether that will cause problems with the Splunk indexing.

Thanks again for the help.

0 Karma