Hello everyone, we moved our data to a new index cluster and since then we are unable to delete events with the "| delete" command. We have a test system, a single-server instance, where the same query executes successfully. The datasets are identical on both systems.
Here's a sample command we are trying to run against the clustered environment:
index=name1 sourcetype=type1 earliest=-3d | delete
Since the documentation also notes that you sometimes need to eval the index name for delete to work, we tried that as well:
index=name1 sourcetype=type1 earliest=-3d | eval index="name1" | delete
Both queries without the delete command return only a small set of 8 events. If we pipe the result to "delete", there is no error message or warning, but the returned result table shows that zero events were deleted.

Currently we have both a new search head cluster and our old single search head connected to this index cluster. That old search head is also the instance from which we migrated our data to the new index cluster, and nothing in its user/role configuration has changed since the migration. Still, delete no longer works on that search head either.
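For what it's worth, a quick check like the following (a sketch using the same index and sourcetype names as above) shows which index and which indexers the events actually resolve to, since delete only marks events where the search finds them:

index=name1 sourcetype=type1 earliest=-3d | stats count by index, splunk_server

If the 8 events show up under the expected index on the clustered indexers, the search side is at least resolving them correctly.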
We followed all instructions in the Splunk documentation to make sure it is not a configuration problem: https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Delete
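A quick way to double-check the permission side on each search head, assuming the REST endpoint behaves the same on your version, is:

| rest /services/authentication/current-context splunk_server=local | table username, roles, capabilities

The user needs the delete_by_keyword capability, normally granted via the can_delete role; if the capabilities field is not populated on your version, checking the listed roles against can_delete works as well.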
Additionally we did the following to troubleshoot the delete process:
Did you check the search log and splunkd.log to see if there are any significant messages?
I ran the search again and the most significant output from the "_audit" log shows:
Audit:[timestamp=10-15-2021 15:07:02.928, user=*****, action=delete_by_keyword, info=granted ][n/a]
Searching _internal didn't show any errors or warnings in splunkd.log for that time frame, nor any related or helpful info messages.
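For completeness, the _internal search looked roughly like this (a sketch; time range set to the minutes around the delete attempt):

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) | table _time, host, log_level, component, _raw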
The audit log shows the delete command was allowed.
Anything in search.log (in Job Inspector)?
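You could also widen the audit check to every delete attempt since the migration, e.g. with something like this (a rough sketch based on the audit event you posted):

index=_audit action=delete_by_keyword | table _time, user, action, info

If every attempt shows info=granted but nothing is ever removed, the problem is more likely on the indexer side than a permissions issue.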