Deployment Architecture

High IO on 2/33 indexers brought down whole cluster

kyaparla
Path Finder

So, 2 of our indexers sometimes have very high I/O due to a known issue, but this causes indexing queueing on all 31 other indexers in the same cluster. When we turn off the 2 indexers before they hit the high-I/O condition, we don't see any issue.

We assume that replication to these problem indexers is blocking the other indexers and causing a ripple effect across the cluster. Is the cluster expected to behave this way?

Is there any configuration to optimize this, or to create dedicated threads for replication so it doesn't block indexing and searching?
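Not an answer from Splunk, but a few `server.conf` `[clustering]` settings on the cluster manager are sometimes tuned to limit how much load a slow replication target can impose on its source peers. A hedged sketch, assuming Splunk Enterprise indexer clustering; the values are illustrative, not recommendations, and defaults vary by version:

```
# server.conf on the cluster manager ("master" on older versions)
[clustering]
mode = manager

# Cap concurrent replications a single peer can receive as a target,
# so a slow peer accumulates fewer in-flight replication connections.
max_peer_rep_load = 3

# Cap concurrent bucket-fixup (build) activities per peer.
max_peer_build_load = 1

# Tighten replication connection/send/receive timeouts (seconds) so
# connections to a stalled peer fail faster instead of holding up the
# source indexer's replication threads.
cxn_timeout = 60
send_timeout = 60
rcv_timeout = 60
```

Another option worth looking at is manual detention: running `splunk edit cluster-config -manual_detention on` on a problem peer stops it from receiving new external data and replicated buckets while it can still serve searches, which may be less disruptive than shutting the peer down entirely.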


Splunk720Dude
Loves-to-Learn

Have you gotten any information on this?
