Deployment Architecture

High I/O on 2/33 indexers brought down the whole cluster

kyaparla
Path Finder

Two of our indexers sometimes have very high I/O due to a known issue, but this causes indexing queues to back up on all 31 other indexers in the same cluster. When we turn off the 2 indexers that are about to hit high I/O, we don't see any issue.

We assume that replication to these problem indexers is blocking the other indexers and causing a ripple effect across the cluster. Is the cluster expected to behave this way?
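
One way to check that assumption (a rough sketch, assuming the peers forward their _internal data so the search covers the whole cluster) is to look for queue-blocking events in metrics.log and see whether they line up with the windows when the two problem indexers are under high I/O:

    index=_internal source=*metrics.log* group=queue blocked=true
    | timechart span=5m count by host

If the blocked-queue counts on the healthy peers spike only while the two high-I/O peers are up, that supports the replication-backpressure theory.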

Is there any configuration to optimize this, or to dedicate threads to replication so that it does not block indexing and searching?
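
For reference, a minimal sketch of the replication-related settings that do exist in server.conf; I'm not aware of a setting that dedicates threads purely to replication, and the values below are illustrative, not recommendations:

    # server.conf on each cluster peer, in the existing [clustering] stanza
    [clustering]
    # Timeout for establishing a replication connection to another peer
    rep_cxn_timeout = 60
    # Low-level send/receive timeouts for replication data between peers
    rep_send_timeout = 5
    rep_rcv_timeout = 10
    # Upper bounds before an unresponsive replication target is given up on
    rep_max_send_timeout = 600
    rep_max_rcv_timeout = 600

    # server.conf on the cluster manager
    [clustering]
    # Cap on concurrent replications a single peer can receive as a target
    max_peer_rep_load = 5

Lowering max_peer_rep_load or tightening the max send/receive timeouts bounds how long and how heavily a slow replication target can tie up its source peers, which is the closest lever to what is being asked for here, rather than a dedicated replication thread pool.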

Splunk720Dude
Loves-to-Learn

Have you gotten any information on this?
