Deployment Architecture

High I/O on 2 of 33 indexers brought down the whole cluster

kyaparla
Path Finder

So, 2 of our indexers sometimes have very high I/O due to a known issue, but this causes index queueing on all 31 other indexers in the same cluster. When we turn off the 2 indexers before their I/O spikes, we don't see any issue.

We assume that replication to these problem indexers is blocking the other indexers and causing a ripple effect across the cluster. Is it expected that the cluster behaves this way?
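
For anyone who wants to check the same thing, a quick way to see whether queue fill spreads to the healthy peers is to chart index queue size per host from metrics.log. This is only a sketch of the search we would run (standard metrics.log queue fields; adjust names for your version):

index=_internal source=*metrics.log* group=queue name=indexqueue
| timechart span=5m perc95(current_size_kb) by host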

Is there any configuration to optimize this, or to create dedicated threads for replication so it doesn't block indexing and searching?
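
To make the question concrete, these are the only server.conf settings we have found so far that seem to touch replication load. Treat this as a sketch with the defaults as we understand them, not a recommendation; please correct us if there are better knobs (and verify against server.conf.spec for your version):

# server.conf on the cluster manager, [clustering] stanza
# (a sketch; values shown are the documented defaults as we understand them)
[clustering]
# max concurrent replications a peer can accept as a replication target
max_peer_rep_load = 5
# max concurrent searchable-copy builds per peer
max_peer_build_load = 2
# intra-cluster connection timeouts, in seconds
cxn_timeout = 60
send_timeout = 60
rcv_timeout = 60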

Splunk720Dude
Loves-to-Learn

Have you gotten any information on this?
