Hello everyone,

we moved our data to a new indexer cluster and since then we have been unable to delete events with the "| delete" command. We have a test system, a single-server instance, which executes the same query successfully, and the datasets are identical on both systems.

Here is a sample search we are trying to run on our clustered environment:

index=name1 sourcetype=type1 earliest_time=-3d | delete

Since the documentation also notes that you sometimes need to eval the index name before deleting events, we tried that as well:

index=name1 sourcetype=type1 earliest_time=-3d | eval index=name1 | delete

Both searches, without the delete command, return only a small set of 8 events. If we pipe the result to "delete", there is no error message or warning, but the returned result table shows that zero events were deleted.

Currently we have a new search head cluster as well as our old single search head connected to this indexer cluster. The old single search head is also the instance we migrated our data from to the new indexer cluster. Nothing has been changed in that server's user/role configuration since the migration, yet delete no longer works on that search head either.

We followed all instructions in the Splunk documentation to make sure this is not a configuration problem: https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Delete

Additionally, we did the following to troubleshoot the delete process:

- We tried other datasets/indexes on our cluster -> same result (still working on the test server)
- We checked that our user has the "can_delete" role and also created new local users with the "can_delete" role; both without success
- We also noticed that if a user has no "can_delete" role assigned, the query result notifies that permissions are missing. Since we don't get that message, we believe the role is set correctly
- We compared authorize.conf between the test and cluster systems and didn't see any differences for those roles
- We checked the splunkd logs on all servers after sending the delete command; no information or errors show up
- We checked that the bucket folders/files on the file system have the correct access permissions (rwx) for the "splunk" user
- We restarted the index cluster
- We tried the search directly on the cluster master, on each search head cluster member, and on the old single search head
- We ran a splunk healthcheck with no issues
- We checked the bucket status for the index cluster
- We checked the monitoring console for the indexers with no issues
- We ran | dbinspect for the index and checked that the listed file system paths are accessible by the splunk user
- We ran the search queries in the terminal via the splunk CLI, with no errors or additional messages shown
- Both the test and cluster servers are running the same version (8.1.6)
- The data targeted by the query was indexed well after the migration
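For reference, these are roughly the commands we used for the role, bucket, and CLI checks above (a sketch only; "name1" and the credentials are placeholders, and exact output fields may differ by version):

Verify the effective roles/capabilities of the searching user (run on the search head):
| rest /services/authentication/current-context splunk_server=local | table username roles capabilities

Compare the effective can_delete role definition on each instance:
$SPLUNK_HOME/bin/splunk btool authorize list role_can_delete --debug

Check bucket state and paths for the index:
| dbinspect index=name1 | table splunk_server bucketId state path

Run the delete from the CLI in case any messages are hidden in the UI:
$SPLUNK_HOME/bin/splunk search 'index=name1 sourcetype=type1 earliest_time=-3d | delete' -auth admin:<password>

Look for related warnings/errors after issuing the delete:
index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) delete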