What's the best way to completely clean an index in a clustered environment?
Currently, there is not really a good way to do this as cleaning the event data from an index will just get replicated back from another cluster node.
That being said, there are two not quite so nice ways of doing it.
For the 1st option, you may find the following link useful:
http://docs.splunk.com/Documentation/Splunk/5.0.3/Indexer/RemovedatafromSplunk#How_to_delete
For the 2nd option, take a look at the documentation here:
http://docs.splunk.com/Documentation/Splunk/5.0.3/Indexer/Setaretirementandarchivingpolicy
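A minimal sketch of what the first option looks like in practice (the index name and credentials are placeholders, not from the original post). Keep in mind that | delete only masks events from search results and does not reclaim disk space, and it must be run by a user whose role has the can_delete capability:

# Option 1 sketch: mask events in the index with the delete search command
# (placeholder index name; requires a role with the can_delete capability)
./splunk search 'index=yourindex | delete' -auth <user>:<password>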
I am super late, but thought I'd add the easy fix that worked for me. I basically just changed the retention policy on the cluster master and pushed out the bundle. Checked the indexes shortly after and voilà, all clean.
[onboarding]
homePath = volume:primary/onboarding/db
coldPath = volume:primary/onboarding/colddb
thawedPath = $SPLUNK_DB/onboarding/thaweddb
repFactor = auto
maxDataSize = auto_high_volume
# Roll hot buckets after at most a day
maxHotSpanSecs = 86401
# Freeze (and, with no frozen archive configured, delete) anything older than 10 seconds
frozenTimePeriodInSecs = 10
# Check every 10 seconds for buckets that need to roll or freeze
rotatePeriodInSecs = 10
# Roll hot buckets that have been idle for 3 minutes
maxHotIdleSecs = 180
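For reference, pushing that change from the cluster master looks roughly like this (a sketch assuming a fairly standard setup; presumably you would restore the original retention settings and push again once the index is empty):

# On the cluster master: validate and push the updated indexes.conf to the peers
./splunk validate cluster-bundle
./splunk apply cluster-bundle --answer-yes
# Watch the peers pick up the new bundle
./splunk show cluster-bundle-status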
Hope this helps someone.
Happy Splunking.
Just to update this thread and share that this still works.
My context:
I recently deployed a cluster environment and did not notice that my filesystem was getting close to its limit.
I only realized it when all my indexers went into status=AutomaticDetention.
Since I knew the data was useless, I followed your approach and it worked.
I know it's quite an old post, but I recently had to do this in a production environment and wanted to share my experience here:
We had to clean a specific index in a multisite indexer cluster following these steps:
./splunk offline command on each indexer
./splunk clean eventdata --index yourindex
./splunk start on all indexers
This procedure worked fine for us; the master had to rebalance some things, but there were no notable errors or warnings and it fixed all buckets as it should.
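For anyone following along, the commands look roughly like this (run from $SPLUNK_HOME/bin on each indexer; the index name is a placeholder):

# Take the peer offline gracefully, wipe the index, then rejoin the cluster
./splunk offline
./splunk clean eventdata -index yourindex -f
./splunk start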
The above methods are likely fine for smaller environments, but they're not so easy to apply in larger or "overworked" clusters. I've deployed a new indexes.conf to the indexers WITHOUT the index to be cleaned (apply cluster-bundle), waited for the system to stabilize, then used the OS to delete the index's directories on disk (by default under /opt/splunk/var/lib/splunk/indexname). Once stabilized, deploy the original indexes.conf.
I have not had problems with this approach (such as the delete command crashing the whole system), and it seems to be as fast as, and probably more efficient than, the other techniques I've tried.
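A rough outline of that sequence, assuming the index stanza lives in the default master-apps/_cluster app and the index is named "onboarding" (both placeholders):

# On the cluster master: remove or comment out the index stanza, then push the bundle
vi $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf
./splunk apply cluster-bundle --answer-yes
./splunk show cluster-bundle-status

# On each indexer, once the cluster has stabilized: remove the index directories on disk
rm -rf /opt/splunk/var/lib/splunk/onboarding

# On the cluster master: restore the original index stanza and push the bundle again
./splunk apply cluster-bundle --answer-yes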
OK, I now have a ton of PROD data going into my 5 indexers and while this will work, your master node will get very angry when you shut down your indexers.
I think if you turn off the feeds at your forwarders and they can queue everything, you might be OK, but I'm not testing it!
This process seems to work.
On my master node, I'm pushing inputs.conf and indexes.conf. I only have test data so I just delete these files AFTER the clean. If you have other indexes, just delete the pertinent info after the clean and then apply the cluster bundle.
Shut down splunk on the indexers and the master node.
Run the clean command on the indexers.
Wait for clean to complete. I only had a couple hours of data so it only took a minute.
Edit or remove the files that you're pushing from the master node.
I just deleted mine since I only had test data.
MASTER (edit or remove)
INDEXERS (edit or remove)
Start splunk on the indexers and the master node.
Apply the cluster bundle from the master node.
Check the status of cluster bundle from the master node.
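Put together, those steps look roughly like this (paths and the index name are placeholders for whatever you push from your master node):

# On every indexer and on the master node
./splunk stop

# On each indexer: wipe the relevant index (splunkd must be stopped)
./splunk clean eventdata -index yourindex -f

# On the master: edit or remove the pushed indexes.conf / inputs.conf under
# $SPLUNK_HOME/etc/master-apps/, then bring everything back up
./splunk start

# On the master: push the updated bundle and confirm the peers have it
./splunk apply cluster-bundle --answer-yes
./splunk show cluster-bundle-status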
This worked for me. It's a shame there is no easy way to clean clustered indexes. The master server should have been able to do this. I hope it's coming in a later version.
What would happen if I ran the splunk offline command on each indexer at nearly the same time,
and then ran ./splunk clean eventdata -index on each one?
When the data is removed, I would start every indexer.
These actions would take less than 10 minutes, so the master node would not detect an indexer failure.
Place the cluster master into maintenance mode first. New data will be replicated as normal, but if an indexer is unavailable for a while, the cluster master will not trigger replication activity to "fix" the bucket counts for the data from the missing indexer.
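If it helps, the maintenance-mode commands on the cluster master are along these lines:

# On the cluster master: pause bucket-fixing before taking peers down
./splunk enable maintenance-mode
# Confirm the current state
./splunk show maintenance-mode
# ...take peers offline, clean, bring them back...
# Then resume normal bucket-fixing activity
./splunk disable maintenance-mode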
@manjosk8
The problem you will run into is that the cluster master may retain the info on data availability of the peers. In turn, this will cause the search head to look for data where it no longer exists.
The bigger issue is that keeping these actions to 10 minutes or less would not be enough. On start, the buckets are checked and replication begins. Also, the default peer heartbeat is 30 or 60 seconds, depending on the version of Splunk you are running.
In the end, these actions would either create a faulty cluster setup or it would not delete the data as expected.
I downvoted this post because you should not use | delete to clean an index.
Option 2 worked for me, and in my case a rolling restart wasn't even initiated.
The cluster master does not keep the knowledge of the data location without the indexer nodes. If they are all down, then all that the CM knows is that there is no data available anywhere. When the indexers recover, they tell the master that none of them have any data, and that is all the CM knows. The CM's job is not to track each piece of data, but to ensure that any piece of data that is reported to exist is sufficiently replicated, and to tell the search head where they are. It gets the knowledge to do this from the indexers.
The only problem with cleaning each indexer node with the entire cluster down is that the cluster master may not have any knowledge of the data being unavailable.
By design, it is meant to be difficult to delete data in a cluster. The point of clustering is to make data resistant to loss by copying and replicating it. So besides these suggestions, you would have to stop all indexers and clean the indexes on each one manually to remove all the replicas to prevent recovery.
Would cleaning it on each of the nodes not propagate to the replicas?
Or would it be better/possible to set replication/search factor to 1, and then (after a little while, perhaps) clean the index on the nodes?
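For reference, the settings that question refers to live in the [clustering] stanza of the cluster master's server.conf, and changing them requires a restart of the master; this is only a sketch of where they are set, and lowering the factors does not by itself delete bucket copies that already exist:

# On the cluster master, edit $SPLUNK_HOME/etc/system/local/server.conf:
#   [clustering]
#   replication_factor = 1
#   search_factor = 1
# Then restart the master so the new factors take effect
./splunk restart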