Would cleaning the index on each of the nodes not propagate to the replicas?
Or would it be better (or even possible) to set the replication and search factors to 1 and then, after a little while, clean the index on each node?
Currently, there is no really good way to do this: event data cleaned from an index on one node will simply be replicated back from another cluster node.
That said, there are two less-than-ideal ways of doing it.
For the first option, you may find the following link useful:
For the second option, take a look at the documentation here:
By design, it is meant to be difficult to delete data in a cluster; the whole point of clustering is to make data resistant to loss by copying and replicating it. So beyond these suggestions, you would have to stop all the indexers and clean the indexes on each one manually, removing every replica so the data cannot be recovered.
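If you do go the full-shutdown route, the rough sequence is sketched below. This is a minimal outline, assuming an index named myindex as a placeholder and a Splunk version that supports maintenance mode; adjust paths and names for your environment.

    # On the cluster master: pause bucket fix-ups so the master does not
    # start re-replicating buckets while the peers are down
    ./splunk enable maintenance-mode

    # On EVERY indexer (cluster peer), with all of them down at the same time:
    ./splunk stop
    ./splunk clean eventdata -index myindex -f   # -f skips the confirmation prompt
    ./splunk start

    # Back on the cluster master, once all peers have rejoined:
    ./splunk disable maintenance-mode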
The only problem with cleaning each indexer node while the entire cluster is down is that the cluster master may have no knowledge of the data having become unavailable.
The cluster master does not retain knowledge of data locations independently of the indexer nodes. If they are all down, then all the CM knows is that there is no data available anywhere. When the indexers come back, they tell the master that none of them have any data, and that is all the CM knows. The CM's job is not to track each piece of data, but to ensure that any piece of data reported to exist is sufficiently replicated, and to tell the search head where to find it. It gets the knowledge to do this from the indexers.
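You can see this for yourself: the master exposes its current view of bucket state over REST, and that view is rebuilt entirely from what the peers report. A quick way to peek at it (the host and credentials below are placeholders):

    # Ask the cluster master what it currently believes about bucket state;
    # everything returned here was reported by the peers, not stored
    # independently by the master
    curl -k -u admin:changeme https://cluster-master.example.com:8089/services/cluster/master/buckets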
What would happen if I ran the splunk offline command on each indexer at nearly the same time, and then ran ./splunk clean eventdata -index <index_name> on each of them? Once the data was removed, I would start every indexer again. These actions would take less than 10 minutes, so the master node would not detect an indexer failure.
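In other words, something like this on each peer (the index name is a placeholder):

    # On each indexer, at roughly the same time:
    ./splunk offline                              # take the peer out of the cluster and stop it
    ./splunk clean eventdata -index myindex -f    # clean requires splunkd to be stopped
    ./splunk start                                # bring the peer back up to rejoin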
The problem you will run into is that the cluster master may retain information about data availability on the peers. In turn, this will cause the search head to look for data where it no longer exists.
The bigger issue is that keeping the window under 10 minutes would not help. On startup, the buckets are checked and replication begins immediately. Also, the default peer heartbeat is 30 or 60 seconds, depending on the version of Splunk you are running, so the master would notice the peers going down well within that window.
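The relevant knob on the master side is heartbeat_timeout. A sketch of where it lives, with the default I recall (check the server.conf spec for your version):

    # server.conf on the cluster master; heartbeat_timeout is the number of
    # seconds of silence after which a peer is declared down -- far less than
    # the 10-minute window above (default shown; version-dependent)
    [clustering]
    mode = master
    heartbeat_timeout = 60

If you need peers to restart without triggering fix-up activity, enabling maintenance mode on the master (./splunk enable maintenance-mode) is the supported way to do that, rather than racing the heartbeat.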
In the end, these actions would either leave the cluster in a faulty state or fail to delete the data as expected.