Splunk Enterprise

How to efficiently delete data from an index that has high volume?

architkhanna
Path Finder

Hi Splunkers,

I have a Splunk index with 3 source types, one for each ticket type. It has millions of records from the last 10 months, and we have now started re-pulling all the data because of 2 new fields the client wants to onboard.

Since we do not want to keep the older records that do not have the new fields, we need a way to identify the data eligible for deletion. Please note that every ticket has more than one update, in some cases up to 50.



PickleRick
SplunkTrust

As a general rule, you don't "delete" data from indexes. Once the data is ingested and indexed, it stays in its bucket until the whole bucket rolls to frozen and is deleted.
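
For context, that retention behavior is controlled per index in indexes.conf. A minimal sketch, assuming a hypothetical index name and example retention values (not your actual settings):

    # indexes.conf on the indexer -- hypothetical index name and example values
    [ticket_index]
    homePath   = $SPLUNK_DB/ticket_index/db
    coldPath   = $SPLUNK_DB/ticket_index/colddb
    thawedPath = $SPLUNK_DB/ticket_index/thaweddb
    # Buckets whose newest event is older than ~10 months roll to frozen;
    # with no coldToFrozenDir/coldToFrozenScript set, frozen buckets are deleted
    frozenTimePeriodInSecs = 26297460
    # Buckets also roll to frozen once the index exceeds this total size (MB)
    maxTotalDataSizeMB = 500000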

True, there is a "delete" command, but it doesn't really remove data from the buckets; it just marks the events inaccessible so they no longer appear in search results. In production I wouldn't really use it.
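
If you did go that route anyway, the usual pattern is to search for exactly the events you want gone and pipe them to delete. It needs a role with the can_delete capability, and the index, sourcetype, and field names below are placeholders for your own:

    index=ticket_index (sourcetype=incident OR sourcetype=change OR sourcetype=problem)
        NOT new_field_1=* NOT new_field_2=*
    | delete

Run the search without the final | delete first and verify the result set is only the old records missing the new fields, since delete is irreversible from the search layer.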

Probably the easiest approach for you would be to drop your index completely, recreate it from scratch and reindex the data (yes, indexing it again will count against your license).
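
If you administer the indexer yourself, wiping the index contents can also be done from the CLI instead of deleting and recreating the index in the UI. This permanently removes all events in that index and requires Splunk to be stopped; the index name is again a placeholder:

    ./splunk stop
    ./splunk clean eventdata -index ticket_index
    ./splunk start

After that, re-ingest the tickets with the 2 new fields included and everything in the index will have the new schema.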
