Deployment Architecture

High I/O on 2 of 33 indexers brought down the whole cluster

kyaparla
Path Finder

So, two of our indexers sometimes have very high I/O due to a known issue, but this causes indexing queue blockage on all 31 other indexers in the same cluster. When we turn off those two indexers ahead of their high-I/O periods, we don't see any issue.
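
For context, this is roughly how we confirmed the queueing: a standard blocked-queue check against metrics.log in the _internal index (adjust the filters for your environment). When the two problem indexers hit high I/O, this shows blocked queues across the other peers as well:

    index=_internal sourcetype=splunkd source=*metrics.log* group=queue blocked=true
    | timechart span=5m count by host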

We are assuming that replication to these problem indexers is blocking the other indexers and causing a ripple effect across the cluster. Is it expected that the cluster behaves this way?
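
As a less disruptive workaround than powering the two peers off, we are also considering manual detention, which (as we understand the docs) stops a peer from receiving new inbound and replicated data while leaving its existing buckets searchable. A sketch of what we would set on each problem peer, not yet validated on our side:

    # server.conf on each problem peer
    [clustering]
    manual_detention = on

or, at runtime: splunk edit cluster-config -manual_detention on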

Is there any configuration to optimize this, or to create dedicated threads for replication so that it does not block indexing and searching?
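
For reference, the closest knobs we have found so far are the replication connection timeouts in the [clustering] stanza of server.conf on the peers. Per our reading of the spec file these default to 60 seconds each; lowering them should make a source peer give up on a slow replication target sooner, though that only bounds the stall rather than giving replication its own threads, so treat the values below as assumptions to test:

    # server.conf on each peer (values are the documented defaults, shown for illustration)
    [clustering]
    cxn_timeout = 60
    send_timeout = 60
    rcv_timeout = 60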


Splunk720Dude
Loves-to-Learn

Have you gotten any information on this?
