We're using a search head, two indexers (not clustered) as search peers, and a deployment server that deploys apps to all of these roles and to all forwarders. Clustered indexers have the option of a rolling restart to stay available at all times.
For non-clustered indexers, there seems to be no such option. How can I make sure one indexer is always available when an app pulled from the DS restarts Splunk?
Granted, this only partly makes sense, since just half the data would be searchable with only one indexer available. It's also clear that forwarders will queue events while no indexer is reachable, so it's not a real problem. Still, I'm wondering how to make sure the indexers restart in a rolling fashion. Maybe through different quiesce timeouts?
The easiest solution might also be the simplest one: on each indexer, change the setting in deploymentclient.conf that controls how often it checks the deployment server for updates to something much longer, like 15 or 30 minutes (this assumes you can tolerate several minutes of lag in picking up changes). Then restart the two indexers separated by about half that interval. When each one comes back up, it starts the timer for its next check and update.
(Sorry I can't tell you exactly which setting — the Splunk docs are down at the moment — but it's pretty obvious when you see it.)
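For what it's worth, the interval in question is most likely `phoneHomeIntervalInSecs`. A sketch of what the change could look like on each indexer (the targetUri hostname and port here are placeholders, and you should verify the stanza against the deploymentclient.conf spec for your version):

```ini
# deploymentclient.conf on each indexer (sketch, not a verified config)
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089
# Phone home every 30 minutes instead of the default 60 seconds,
# so a staggered restart keeps the two indexers' check times apart
phoneHomeIntervalInSecs = 1800
```

After deploying this, restarting the two indexers roughly 15 minutes apart should keep their update checks (and any resulting restarts) from coinciding.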
If you do that, each indexer should then, more or less, check for changes and restart at slightly different times. If one restarts faster than the other, they'll "drift" apart fairly quickly, but this might work for you.
Personally, I wouldn't be concerned about this at all; in fact, I'd make them both go down at the same time. The point being that I'd rather have a short period of no data than a longer period of misleading results because searches only see half the data. And... why not cluster? The only real overhead is the Cluster Master (which can be a smallish VM); even if you leave RF/SF = 1 so you don't replicate data, you get rolling restarts for free... But that's another topic. 🙂
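To make the clustering suggestion concrete, a minimal single-site setup could look roughly like the sketch below. Hostnames, the symmetric key, and the replication port are illustrative, and the stanza values use the classic naming (newer Splunk versions also accept `manager`/`peer` in place of `master`/`slave`) — check the server.conf spec for your version before using this:

```ini
# server.conf on the Cluster Master (illustrative values only)
[clustering]
mode = master
# RF/SF of 1 means no data replication, but you still
# get cluster management features like rolling restart
replication_factor = 1
search_factor = 1
pass4SymmKey = changeme

# server.conf on each indexer (peer) — illustrative values only
[replication_port://9887]

[clustering]
mode = slave
master_uri = https://cm.example.com:8089
pass4SymmKey = changeme
```

With the peers joined, a rolling restart can then be triggered from the Cluster Master so only one indexer is down at a time.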