Splunk Enterprise

Migrating Hot drive to faster disk for all indexers


We are planning a migration of hot drives to faster disk for all indexers. The current plan is below, and I wonder if it makes sense, because the plan enters and exits maintenance mode for each indexer individually, whereas I'm used to keeping the cluster in maintenance mode for the entire operation. Any thoughts?

The current plan -


Event monitoring
Suspend alert monitoring on the indexers

(Repeat following for each indexer, one at a time)
Splunk Ops:
Put Splunk Cluster in Maintenance Mode
Stop Splunk service on One Indexer
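The two Splunk Ops steps above map to the following CLI calls (a sketch; the `/opt/splunk` install path is an assumption, and `enable maintenance-mode` runs on the cluster manager, not the peer):

```shell
# On the cluster manager: enable maintenance mode so bucket-fixup
# activity is suppressed while a peer is down.
/opt/splunk/bin/splunk enable maintenance-mode --answer-yes

# On the indexer being migrated: stop the service.
/opt/splunk/bin/splunk stop
```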

VM Ops
Storage vMotion the existing 2.5 TB disk to one of the Unity datastores
Provision new 2.5TB VM disk from the VSAN datastore

Linux Ops
Rename the existing hot data logical volume "/opt/splunk/hot-data" to "/opt/splunk/hot-data-old"
Create a new volume group and mount the new 2.5TB disk as "/opt/splunk/hot-data"
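The Linux Ops steps above could look like the following (a sketch only; the VG names `vg_hot`/`vg_hot_new`, LV name `lv_hotdata`, device `/dev/sdd`, and XFS as the filesystem are all assumptions to be adjusted for your environment):

```shell
# Unmount the old hot volume and remount it under the "-old" path
# so it remains available as a fallback.
umount /opt/splunk/hot-data
lvrename vg_hot lv_hotdata lv_hotdata_old
mkdir -p /opt/splunk/hot-data-old
mount /dev/vg_hot/lv_hotdata_old /opt/splunk/hot-data-old

# Bring the new 2.5 TB vSAN-backed disk into a new volume group
# and mount it at the original hot path.
pvcreate /dev/sdd
vgcreate vg_hot_new /dev/sdd
lvcreate -l 100%FREE -n lv_hotdata vg_hot_new
mkfs.xfs /dev/vg_hot_new/lv_hotdata
mkdir -p /opt/splunk/hot-data
mount /dev/vg_hot_new/lv_hotdata /opt/splunk/hot-data
chown -R splunk:splunk /opt/splunk/hot-data
```

Remember to update `/etc/fstab` so the new mount survives a reboot.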

Splunk Ops
Restart Splunk service on indexer
Take Indexer Cluster out of Maintenance Mode
Review the Cluster Master to confirm the indexer is processing and that rebalancing has started as expected
Wait a few minutes to allow Splunk to rebalance across all indexers
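The restart and maintenance-mode exit above can be sketched as follows (the install path, the cluster manager host `cm.example.com`, and the `admin` credentials are assumptions):

```shell
# On the migrated indexer:
/opt/splunk/bin/splunk start

# On the cluster manager: leave maintenance mode and confirm state.
/opt/splunk/bin/splunk disable maintenance-mode --answer-yes
/opt/splunk/bin/splunk show maintenance-mode

# Optionally confirm peer status via the cluster manager REST API.
curl -sk -u admin \
  "https://cm.example.com:8089/services/cluster/master/peers?output_mode=json"
```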

(Return to top and repeat steps for next indexer)

Splunk Ops: 
Validate service and perform test searches
Check the CM panel -> Resources/Usage/Machine (bottom panel, IOWait Times) and monitor changes in IOWait
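For the test-search step, a quick spot check that each migrated indexer is ingesting again might look like this (run from a search head; the host name `idx01` is a placeholder):

```shell
# Expect a non-zero count if the indexer resumed ingestion of its
# own internal logs within the last 5 minutes.
/opt/splunk/bin/splunk search \
  'index=_internal host=idx01 earliest=-5m | stats count'
```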
Event monitoring
Re-enable alert monitoring on the indexers



In addition, Splunk PS suggested using:


splunk offline --enforce-counts



I'm not sure that's the right approach, since it might require migrating the ~40 TB of cold data and would slow down the entire operation.
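For reference, the two offline variants behave quite differently (a sketch of my understanding):

```shell
# Default offline: the peer shuts down after a short hand-off; the
# cluster may be temporarily incomplete until fixup finishes.
splunk offline

# --enforce-counts: the peer stays up until the replication and search
# factors are fully met WITHOUT it -- which can mean copying all of its
# buckets (including ~40 TB of cold data) to other peers first.
splunk offline --enforce-counts
```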



I'm a bit lost here. If you're doing a Storage vMotion between datastores, why bother with logical storage operations inside the VMs?



I totally agree with @PickleRick. If you have this on VMware or something similar, why not use it to do the storage migration? In that case there is no need for any actions on the Splunk side, and you can do this without service breaks. Of course, if you don't have the needed licenses for your virtualisation/storage layer, then it's a different story. But I expect that you have Linux VGs in use, and you can use those to do this without service breaks, or at least with minimal reboots etc.
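With LVM, the migration can be done online with `pvmove` while the filesystem stays mounted and Splunk keeps running (a sketch; the VG name `vg_hot` and devices `/dev/sdc`/`/dev/sdd` are assumptions):

```shell
pvcreate /dev/sdd            # initialise the new 2.5 TB vSAN disk
vgextend vg_hot /dev/sdd     # add it to the existing volume group
pvmove /dev/sdc /dev/sdd     # move extents off the old PV, online
vgreduce vg_hot /dev/sdc     # drop the old Unity-backed PV from the VG
pvremove /dev/sdc            # wipe the LVM label from the old disk
```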

splunk offline --enforce-counts

should only be used when you are permanently removing the whole node from the cluster!

r. Ismo
