How to clean a clustered index?

dart
Splunk Employee

What's the best way to completely clean an index in a clustered environment?

1 Solution

Rob
Splunk Employee

Currently, there is not really a good way to do this, as event data cleaned from an index will just get replicated back from the other cluster peers.

That being said, there are two not quite so nice ways of doing it.

  1. With a user that has the can_delete role, pipe all the event data to be removed to the delete command. Naturally, this means that all the caveats for using the delete command apply (data is not removed from disk, etc.).
  2. Make sure you stop indexing data to the index you are about to clean, then alter your data retention policy to be extremely short. This will roll all the buckets to frozen and hence clear out the index. Once all the data has been removed from the index on all the peers, the retention policy can be set back to its original settings so that new data can be indexed.

For the 1st option, you may find the following link useful:
http://docs.splunk.com/Documentation/Splunk/5.0.3/Indexer/RemovedatafromSplunk#How_to_delete
For the 2nd option, take a look at the documentation here:
http://docs.splunk.com/Documentation/Splunk/5.0.3/Indexer/Setaretirementandarchivingpolicy
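
For example, option 1 boils down to running a search like the following, as a user that holds the can_delete role (the index name here is just a placeholder):

  • index=your_index | delete

As noted above, this only hides the events from search; the disk space is reclaimed only when the buckets eventually age out.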

sahr
Path Finder

I am super, super late, but thought I'd add an easy fix that worked for me. I basically just changed the retention policy on the Cluster Master and then pushed it out. Checked the indexes shortly after and voila, all clean.

[onboarding]
homePath = volume:primary/onboarding/db
coldPath = volume:primary/onboarding/colddb
thawedPath = $SPLUNK_DB/onboarding/thaweddb
# keep the index replicated across the cluster
repFactor = auto
maxDataSize = auto_high_volume
maxHotSpanSecs = 86401
# retention cut to 10 seconds so every bucket rolls to frozen (deleted by default) almost immediately
frozenTimePeriodInSecs = 10
# check for buckets to roll or freeze every 10 seconds instead of the default interval
rotatePeriodInSecs = 10
# roll idle hot buckets to warm after 3 minutes so they become eligible to freeze
maxHotIdleSecs = 180
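
For completeness, the push from the Cluster Master is the usual bundle apply (path assumes a default install):

  • /opt/splunk/bin/splunk apply cluster-bundle --answer-yes

Once the shortened retention has emptied the index on all peers, set frozenTimePeriodInSecs back to its original value and push again.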

Hope this helps someone.

Happy Splunking.

FFZ
Engager

Just to update this thread and share that this still works.

My context:

I recently deployed a cluster environment and did not notice that my filesystem was getting close to its limit.
I realized it when all my indexers went into status=AutomaticDetention.
As I knew the data was useless, I followed your approach and it worked.

claudio_manig
Communicator

I know it's quite an old post, but I recently had to do this in a production environment and wanted to share my experience here:

We had to clean a specific index in a multisite indexer cluster following these steps:
./splunk offline on each indexer
./splunk clean eventdata --index yourindex on each indexer
./splunk start on all indexers

This procedure worked fine for us; the master had to rebalance some things but didn't report any remarkable errors or warnings, and it fixed all buckets as it should.
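
Spelled out on a single peer, the sequence is roughly as follows (the index name is a placeholder; add -f to skip the confirmation prompt):

  • /opt/splunk/bin/splunk offline
  • /opt/splunk/bin/splunk clean eventdata -index yourindex
  • /opt/splunk/bin/splunk start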

gbowden_pheaa
Path Finder

The above methods are likely fine for smaller environments, but not so easy to apply in larger or "overworked" clusters. I deployed a new indexes.conf to the indexers WITHOUT the index to be cleaned (apply cluster-bundle), waited for the system to stabilize, then used the OS to delete the index files (default /opt/splunk/var/lib/splunk/<indexname>). Once stabilized, deploy the original indexes.conf again.

I have not had problems (like the delete command crashing the whole system) with this approach; it seems to be as fast as, and probably more efficient than, the other techniques I've tried.
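
As a rough sketch of the OS step, on each indexer and assuming the default data location and an index named yourindex (check homePath and coldPath in indexes.conf first, and only do this while the index is absent from the pushed bundle):

  • rm -rf /opt/splunk/var/lib/splunk/yourindex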

dfronck
Communicator

OK, I now have a ton of PROD data going into my 5 indexers, and while this will work, your master node will get very angry when you shut down your indexers.

I think if you turn off the feeds at your forwarders and they can queue everything, you might be OK, but I'm not testing it!


This process seems to work.

On my master node, I'm pushing inputs.conf and indexes.conf. I only have test data, so I just delete these files AFTER the clean. If you have other indexes, just remove the pertinent stanzas after the clean and then apply the cluster bundle.

  • MASTER: /opt/splunk/etc/master-apps/_cluster/local/inputs.conf
  • MASTER: /opt/splunk/etc/master-apps/_cluster/local/indexes.conf
  • INDEXER: /opt/splunk/etc/slave-apps/_cluster/local/inputs.conf
  • INDEXER: /opt/splunk/etc/slave-apps/_cluster/local/indexes.conf

Shut down splunk on the indexers and the master node.

  • /opt/splunk/bin/splunk stop

Run the clean command on the indexers.

  • /opt/splunk/bin/splunk clean eventdata -index test_log

Wait for the clean to complete. I only had a couple of hours of data, so it only took a minute.

Edit or remove the files that you're pushing from the master node.
I just deleted mine since I only had test data.

MASTER (edit or remove)

  • rm /opt/splunk/etc/master-apps/_cluster/local/inputs.conf
  • rm /opt/splunk/etc/master-apps/_cluster/local/indexes.conf

INDEXERS (edit or remove)

  • rm /opt/splunk/etc/slave-apps/_cluster/local/inputs.conf
  • rm /opt/splunk/etc/slave-apps/_cluster/local/indexes.conf

Start splunk on the indexers and the master node.

  • /opt/splunk/bin/splunk start

Apply the cluster bundle from the master node.

  • /opt/splunk/bin/splunk apply cluster-bundle --answer-yes

Check the status of cluster bundle from the master node.

  • /opt/splunk/bin/splunk show cluster-bundle-status

mikaelbje
Motivator

This worked for me. It's a shame there is no easy way to clean clustered indexes. The master server should have been able to do this. I hope it's coming in a later version.

manjosk8
Engager

What would happen if I ran the splunk offline command on each indexer at nearly the same time,
and then ran the ./splunk clean eventdata -index command on each indexer?

When the data is removed, I would start every indexer.
These actions would take less than 10 minutes, so the master node would not detect an indexer failure.

sowings
Splunk Employee

Place the cluster master into maintenance mode first. New data will be replicated as normal, but if an indexer is unavailable for a while, the cluster master will not trigger replication activity to "fix" the bucket counts for the data from the missing indexer.
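
For reference, maintenance mode is toggled on the cluster master like this (verify the exact commands against the docs for your Splunk version):

  • /opt/splunk/bin/splunk enable maintenance-mode
  • /opt/splunk/bin/splunk disable maintenance-mode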

Rob
Splunk Employee

@manjosk8

The problem you will run into is that the cluster master may retain the info on data availability of the peers. In turn, this will cause the search head to look for data where it no longer exists.

The bigger issue is that keeping the actions to 10 minutes or less would not be enough. On start, the buckets are checked and replication begins. Also, the default peer heartbeat is 30 or 60 seconds, depending on the version of Splunk you are running.

In the end, these actions would either create a faulty cluster setup or it would not delete the data as expected.

dxu_splunk
Splunk Employee

I downvoted this post because you should not use | delete to clean an index.

1StopBloke
Explorer

Option 2 worked for me, and in my case a rolling restart wasn't even initiated.

gkanapathy
Splunk Employee

The cluster master does not keep the knowledge of the data location without the indexer nodes. If they are all down, then all that the CM knows is that there is no data available anywhere. When the indexers recover, they tell the master that none of them have any data, and that is all the CM knows. The CM's job is not to track each piece of data, but to ensure that any piece of data that is reported to exist is sufficiently replicated, and to tell the search head where they are. It gets the knowledge to do this from the indexers.

Rob
Splunk Employee

The only problem with cleaning each indexer node with the entire cluster down is that the cluster master may not have any knowledge of the data being unavailable.

gkanapathy
Splunk Employee

By design, it is meant to be difficult to delete data in a cluster. The point of clustering is to make data resistant to loss by copying and replicating it. So beyond these suggestions, you would have to stop all the indexers and clean the index on each one manually to remove all the replicas and prevent recovery.

kristian_kolb
Ultra Champion

Would cleaning it on each of the nodes not propagate to the replicas?

Or would it be better/possible to set replication/search factor to 1, and then (after a little while, perhaps) clean the index on the nodes?
