Deployment Architecture

Why are we seeing vastly different wear on particular indexers (same exact hard drives, same exact install date)?

briancronrath
Contributor

We use 12 indexers in a cluster, all with the same exact hardware and install dates. However, 3 particular indexers look to degrade at nearly 3 times the rate of the rest.

I'm wondering what some possibilities could be for this, or if anyone else has ever run into something similar? This is hardware wear on the hard drives specifically.


adonio
Ultra Champion

hello there,

As @ddrillic mentioned, what do you mean by "degrade/wear"?

Sometimes it happens that one or more indexers are being "targeted" by forwarders that stop load balancing and therefore accumulate more data. Other times, I have seen outputs.conf for "heavier" loads, like syslog servers for example, configured to send to only a portion of the indexers. It can also happen that all your data is pointed at the original x indexers because you added indexers later and forgot to update outputs.conf.
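For reference, a healthy setup lists every indexer in each forwarder's outputs.conf and forces time-based load balancing. A minimal sketch, assuming placeholder hostnames and the default receiving port (not taken from this thread):

[tcpout]
defaultGroup = all_indexers

[tcpout:all_indexers]
# list all 12 indexers here; only the first three are shown
server = idx01.example.com:9997, idx02.example.com:9997, idx03.example.com:9997
# rotate to a new indexer every 30 seconds so volume spreads evenly
autoLBFrequency = 30
# helps when long-lived streams keep a forwarder pinned to one indexer
forceTimebasedAutoLB = true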

Also, when more data is written to a couple of indexers, those same indexers end up serving most of the data to the search head(s), so now you have even more load on them.

I recommend starting by checking data ingestion by indexer (splunk_server), and then checking the job inspector to see which indexer takes the longest to serve data.
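For example, something like the following searches (a starting point, not the only way; fields like hostname in the tcpin_connections metrics can vary slightly by version):

| tstats count where index=* by splunk_server

index=_internal source=*metrics.log* group=tcpin_connections
| stats sum(kb) as total_kb by hostname, splunk_server
| sort - total_kb

The first gives a quick event count per indexer; the second breaks incoming volume down by forwarder (hostname) and indexer, which will quickly expose a forwarder that is pinned to a subset of indexers instead of load balancing.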

hope it helps


ddrillic
Ultra Champion

degrade/wear out - what do you mean by that?
