We use 12 indexers in a cluster, all with the exact same hardware and install dates. However, 3 particular indexers appear to degrade at nearly 3 times the rate of the rest.
I'm wondering what some possibilities could be for this, or if anyone else has ever run into something similar? By "degrade" I mean hardware wear-out on the hard drives specifically.
hello there,
like @ddrillic mentioned, what do you mean by "degrade / wear"?
Sometimes it happens that one or more indexers get "targeted" by forwarders that stop load balancing and therefore accumulate more data. Other times, I have seen outputs.conf for "heavier" sources (syslog servers, for example) configured to send to only a portion of the indexers. It can also happen that all your data points at the original x indexers because you added indexers later and forgot to update outputs.conf.
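To illustrate that last point, here is a minimal outputs.conf sketch for the forwarder side. The hostnames are hypothetical; the idea is simply that every indexer must appear in the `server` list for auto load balancing to spread data across all 12, not just the subset that existed when the forwarder was first set up:

```
# outputs.conf on the forwarder -- hostnames are hypothetical
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx01.example.com:9997, idx02.example.com:9997, idx03.example.com:9997
# ...continue the list through idx12...
# re-pick a target indexer every 30 seconds
autoLBFrequency = 30
```

If you added indexers to the cluster but this list was never updated, only the original targets receive (and wear from) the write load.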
Also, when more data is written to a couple of indexers, those same indexers end up serving most of the data to the search head(s), so now you have even more load on them.
I recommend starting by checking data ingestion per indexer (splunk_server), then checking the job inspector to see which indexer takes the longest to serve data.
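As a starting point, something along these lines should show whether event volume is skewed toward your 3 suspect indexers (a sketch; adjust the index filter and time range to your environment):

```
| tstats count where index=* by splunk_server
```

And for ingestion volume rather than event counts, the per-host throughput metrics in _internal can be summed per indexer:

```
index=_internal source=*metrics.log* group=per_host_thruput
| stats sum(kb) AS total_kb by splunk_server
```

If the 3 degrading indexers show disproportionate totals here, the wear difference is almost certainly a data-distribution problem rather than a hardware one.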
hope it helps
degrade/wear out - what do you mean by that?