Monitoring Splunk

Physical Splunk indexers require maintenance to move connections/cables to the NFS mount used for frozen storage

aborgna512
Explorer

My organization runs 9 physical indexers connected to NFS mounts, where frozen buckets are archived after aging out. A project is in planning to migrate the physical cable connections on the indexers that feed the NFS mount. I'm looking for advice on the best strategy to keep the indexer cluster ecosystem peaceful during the migration. The physical connection from indexer to NFS will need to be disconnected for 3 days to 1 week to allow the connections/cables to be moved to their future switch homes. The directory that holds the cold buckets has more than adequate storage to hold more/larger buckets during this migration window.

Is there a recommended method/process to temporarily extend/expand the cold bucket volume in the cluster during this maintenance window, which could then be restored to the original configuration once the NFS mounts are reconnected?

I was thinking that increasing the cold storage maximum size (coldPath.maxDataSizeMB) in indexes.conf prior to the disconnect might provide this type of cover. However, I would love a second opinion, given this is the first time I've encountered this type of request.
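For illustration, a temporary override along these lines is what I had in mind (the index name and sizes below are made-up examples, not our actual values):

    # indexes.conf on the indexers (or pushed from the cluster manager)
    # Temporary settings for the migration window only
    [my_example_index]
    homePath   = $SPLUNK_DB/my_example_index/db
    coldPath   = $SPLUNK_DB/my_example_index/colddb
    thawedPath = $SPLUNK_DB/my_example_index/thaweddb
    # Raised from the normal value so cold storage can absorb buckets that
    # would otherwise roll to the unreachable frozen NFS mount
    coldPath.maxDataSizeMB = 500000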

Any insight/advice that can be provided would be greatly appreciated.

1 Solution

richgalloway
SplunkTrust

This is not the kind of thing that comes up often (or ever), so there's little to no history from which to learn.  My advice is to increase frozenTimePeriodInSecs by at least 864000 seconds (10 days) during the work and then change it back to the original values after the work is complete.  Do ensure the size-related settings are high enough that the maximums will not be reached during the NFS outage.
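As a rough sketch only (the index name and values are placeholders, not taken from your environment), the temporary change in indexes.conf might look like this:

    # indexes.conf - temporary values while the frozen NFS mount is offline
    [my_example_index]
    # Normal retention of 100 days (8640000 seconds) plus 10 extra days (864000),
    # so no bucket ages out and tries to roll to frozen during the outage
    frozenTimePeriodInSecs = 9504000
    # Keep the size cap comfortably above current usage so it is not hit first
    maxTotalDataSizeMB = 500000

Revert both values to the originals once the NFS mounts are reconnected.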

---
If this reply helps you, Karma would be appreciated.



aborgna512
Explorer

@richgalloway Our hardware/cabling teams have decided to perform the network cable swap in one night. I won't need to make any changes to the index aging configurations, since the cold-to-frozen connection on all indexers will be redone in a matter of a few hours and we'll be online to monitor indexer health during the process. Thank you so much for the feedback; we'll make a note of the solution in case it's needed in the future.

aborgna512
Explorer

@richgalloway Thank you for the quick response. That setting's value is staggered across our indexes based on volume/priority, ranging from 100 days to 2 years. It makes sense to me that increasing frozenTimePeriodInSecs by 10 days should keep the data in cold storage for the duration of the work. Our /splunk_cold volume settings are currently fixed near the maximum of that directory. If space becomes a problem, we'll know about it from internal monitoring, as we have alert triggers set up for >90% and >95% utilization on the file system.
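For reference, an ad-hoc check we can run alongside those alerts looks something like the search below. Treat it as a sketch: the mount point is ours, and it assumes the partitions-space REST endpoint reports capacity and free space in MB.

    | rest splunk_server=* /services/server/status/partitions-space
    | search mount_point="/splunk_cold"
    | eval pct_used = round((capacity - free) / capacity * 100, 1)
    | where pct_used > 90
    | table splunk_server mount_point capacity free pct_used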
