Deployment Architecture

High IO on 2/33 indexers brought down whole cluster

kyaparla
Path Finder

So, 2 of our indexers sometimes have very high I/O due to a known issue, but this causes indexing queueing on all 31 other indexers in the same cluster. When we turn off the 2 indexers that are about to hit high I/O, we don't see any issue.

We assume that replication to these problem indexers is blocking the other indexers and causing a ripple effect across the cluster. Is it expected that the cluster behaves this way?
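
For reference, the queueing is visible in metrics.log on the peers. A search along these lines (time range and field names per the standard group=queue metrics; adjust for your environment) shows which queues are blocking on which indexers, so the ripple effect can be correlated with the two problem peers:

    index=_internal source=*metrics.log sourcetype=splunkd group=queue
    | eval is_blocked=if(blocked=="true",1,0)
    | stats sum(is_blocked) as blocked_count count by host name
    | eval blocked_pct=round(blocked_count/count*100,2)
    | sort - blocked_pct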

Is there any configuration to optimize this, or to create dedicated threads for replication only, without blocking indexing and searching?
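
For context, our only workaround today is shutting the two peers down completely. A rough sketch of the alternatives we are considering (these are the standard server.conf setting and peer CLI as we understand them; we have not verified that either actually relieves replication backpressure):

    # server.conf on each indexer (assumes the default single pipeline set today);
    # adds a second ingestion pipeline set, each with its own queues
    [general]
    parallelIngestionPipelines = 2

    # CLI on a problem peer: manual detention takes it out of the data path
    # during the high-I/O window without a full shutdown
    splunk edit cluster-config -manual_detention on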


Splunk720Dude
Loves-to-Learn

Have you gotten any information on this?
