Let's say I receive 5 files daily; I index those 5 files and run my query to generate the report. My requirement is that after running the query, the data should be deleted from the index. The next day I will put another 5 files into the same index and run the query to generate the report again.
Kindly help me in this regard.
Thanks in Advance,
There are two ways I know of: the first is using clean from the Splunk CLI, the second is the delete command within search.
The clean command deletes all event data from the index. The catch is that you must stop your Splunk indexer before running the command and restart it afterwards.
$SPLUNK_HOME/bin/splunk clean eventdata -index myindex -f
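The full nightly sequence is stop → clean → restart. A minimal sketch of that, assuming SPLUNK_HOME points at your installation and the index holding the daily files is named myindex (both are placeholders; adjust for your environment):

```
#!/bin/sh
# Hypothetical nightly cleanup: stop Splunk, wipe the index, restart.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
INDEX=myindex

"$SPLUNK_HOME/bin/splunk" stop
"$SPLUNK_HOME/bin/splunk" clean eventdata -index "$INDEX" -f
"$SPLUNK_HOME/bin/splunk" start
```

You could run this from a scheduler after the report is generated, but note the obvious cost: the indexer is down for the duration of the clean.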
The search delete command has to be granted explicitly to a user or role; even the admin does not have the ability to delete unless it is specified. Also, the delete command is really a pseudo-delete: it only flags the events as unsearchable, giving the appearance of being deleted. So you will not reclaim disk space.
source="myfiles*.log" | delete
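To grant the capability, you can either assign the built-in can_delete role to the user, or add the delete_by_keyword capability to a role of your own in authorize.conf. A sketch of the latter (the role name report_admin is made up for illustration):

```
# authorize.conf — hypothetical role allowed to run | delete
[role_report_admin]
importRoles = user
delete_by_keyword = enabled
```

The user then runs the search above while holding that role; remember the events are only hidden from search, not removed from disk.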
Once an index is cleaned the data is gone (all event data in that index); there is no going back. If you have a distributed search setup with multiple indexers and indexes, you will have to perform this action on every indexer.
Didn't seem to work. I stopped both splunkd and splunkweb (Windows implementation) and issued the 'splunk clean eventdata' command (as admin); it warned that all data would be lost. I agreed. The CLI indicated that all databases had been cleaned except:
Disabled database 'splunklogger': will not clean
When I restarted the Splunk services, all of the index data was still there and all the settings were still intact.
This has to be completed on each indexer if you are running a distributed search configuration. Also, did you clean with a Windows admin account and a Splunk user that has the admin role? Did you specify the index, or use "clean all -f"?
Or, for a more automatic approach that involves a little more initial configuration, you can set retention parameters.
This can be a bit trickier, since the rather short retention requirements you have are a bit unusual.
I believe (I haven't tried it myself, at least not all of them in this type of combination) that you can set the following parameters for your index:
[your_index]
maxTotalDataSizeMB = XXXX
maxDataSize = XXXX
maxHotBuckets = 1
maxHotIdleSecs = 28800
maxWarmDBCount = 0
coldPath.maxDataSizeMB = 1
where XXXX should be something like 10 times the combined size you would expect your 5 files to be. This is in MB. So if you have 5 files, each about 50 MB in size, set this value to 2500.
In essence, what this does (should do) is take all the incoming events and store them in a hot bucket. After 8 hours (28800 seconds) the hot bucket is rolled to a warm bucket. But since the maximum number of allowed warm buckets is 0, Splunk should move it straight to a cold bucket. If (and this is another assumption) your indexed data exceeds 1 MB, it will be frozen (i.e. deleted, unless you've explicitly configured a frozen-data archive path such as coldToFrozenDir).
Please note that you should not store any information you wish to keep in this index, or store more information than you have specified; it will just get deleted.
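If the file sizes vary from day to day, the sizing arithmetic above is easy to script. A rough sketch (the 10x headroom factor is the rule of thumb from above, and the sizes below are the example values from the text):

```shell
# Suggest maxTotalDataSizeMB as ~10x the combined size of the daily files.
# Sizes are in MB; replace with your actual file sizes.
files_mb="50 50 50 50 50"
total=0
for s in $files_mb; do
  total=$((total + s))
done
suggested=$((total * 10))
echo "maxTotalDataSizeMB = $suggested"   # prints 2500 for the example above
```

Plug the printed value into the [your_index] stanza and keep it well above your real daily volume, or data will be frozen before your report runs.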
More information can be found here:
If you decide to try this out, please report back on the outcome. You're the guinea pig in this experiment 🙂
wow - that's a long out of office .... 🙂
could it be, that you used the wrong answer here? the last activity on this was almost exactly two years ago 😉