I am planning to upgrade our Splunk infrastructure, which requires our Splunk indexers to go offline for a few minutes. I am using SmartStore for Splunk indexing. Before I start the upgrade and take down our indexers, I want to roll over all the data that is in the hot buckets to SmartStore, and then start the upgrade. What is the best way to do this?
Appreciate your response @marnall. My question comes from a recent scenario where we did a Splunk upgrade using infrastructure as code, with SmartStore for indexing. We were of the opinion that data gets moved to external storage once it rolls to a warm bucket, but unfortunately we lost some data during the migration. The only explanation we can think of is that the hot buckets, which are stored locally, did not get rolled over to warm, so they were never uploaded to the external storage and available for later searches. We have another migration scheduled for this weekend, so I want to be 100 percent sure we don't have any data loss.
No. It doesn't work like that. A bucket doesn't "roll to SmartStore". A bucket rolls to warm, and the cache manager uploads it to SmartStore when it can. So if you:
1) Didn't give Splunk a chance to upload the bucket to smartstore and
2) Didn't have other copies of a bucket (or destroyed all instances at once),
then yes, you might have experienced data loss.
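If you want to force the hot-to-warm roll explicitly before a shutdown (rather than relying on the shutdown procedure to do it), each index exposes a roll-hot-buckets REST endpoint you can hit from the CLI. A minimal sketch, assuming admin credentials and an index named `main` (substitute your own index names and credentials):

```shell
# Roll the hot buckets of one index to warm; the cache manager
# will then upload the new warm buckets to SmartStore.
/opt/splunk/bin/splunk _internal call /data/indexes/main/roll-hot-buckets \
    -method POST -auth admin:changeme
```

You would repeat this for each index, then give the cache manager time to finish uploading before taking the peer down.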
Infrastructure as code? Does that mean you are terminating the indexers rather than shutting them down, upgrading, and turning them on again?
You can gracefully take the indexers offline using "/opt/splunk/bin/splunk offline". They will stop indexing, roll hot buckets to warm, and upload them to remote storage; then you can bring them back up and they will rejoin the cluster.
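For an in-place upgrade, the per-peer sequence might look like the following sketch (one peer at a time; the upgrade step itself depends on how you install Splunk, so it is only a placeholder comment here):

```shell
# On each indexer (cluster peer), one at a time:

# Gracefully take the peer offline: it stops indexing, rolls hot
# buckets to warm, and uploads them to SmartStore before stopping.
/opt/splunk/bin/splunk offline

# ... perform the Splunk upgrade on this node ...

# Start the upgraded peer; it rejoins the cluster automatically.
/opt/splunk/bin/splunk start
```

The key point is that `splunk offline` is what gives the peer the chance to roll and upload its hot buckets; terminating the instance skips that step.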
Ref: https://docs.splunk.com/Documentation/Splunk/9.3.1/Indexer/Takeapeeroffline