Splunk Enterprise

How to troubleshoot a scheduler that has a high number of concurrent searches running constantly?

joshiro
Communicator

We are running an SHC on Splunk Enterprise (on-prem) 9.0.1 and noticed that the number of concurrent searches on one of the nodes is roughly three times higher than on the rest, even though scheduler delegation shows jobs being delegated evenly across the nodes.
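To confirm where scheduled jobs are actually executing, a first check (a sketch, assuming default retention of the _internal index) is to count scheduler.log executions per member; the host field shows which SH member ran each job:

index=_internal sourcetype=scheduler status=*
| timechart span=1h count AS scheduled_executions BY host

If one member dominates this count, the imbalance really is in scheduled work; if the counts are even, the extra concurrency on that node is more likely coming from ad-hoc, dashboard, or orphaned searches.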

Most of the scheduled searches come from an app that runs DB Connect (dbx) queries to keep some lookups updated. These are scheduled to run only a few times a week, but they appear to be running constantly in the scheduler.
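To see how often those searches actually fire and how long they run, a breakdown like the following can help (the app filter is a placeholder; replace it with the DB Connect app's real name). A run_time close to or exceeding the scheduling interval, or many continued/deferred/skipped statuses, would explain why they seem to be running all the time:

index=_internal sourcetype=scheduler app=<your_dbx_app>
| stats count AS executions avg(run_time) AS avg_run_time_sec max(run_time) AS max_run_time_sec BY savedsearch_name status
| sort - executions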

These concurrent searches run constantly even after a restart of the node.

This doesn't happen on a standalone instance with the same apps installed, so we think it is a clustering issue.
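Checking the SHC state itself may help rule that in or out; for example, the following (assuming REST access from a member) shows the current captain and whether the cluster is in the middle of a rolling restart or has service-readiness problems:

| rest /services/shcluster/status splunk_server=local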

How can we troubleshoot/debug this behaviour?
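One way to see what kind of searches make up the concurrency on the busy member (a sketch, assuming the _introspection index is enabled, which it is by default) is to count distinct search IDs per host, search type, and provenance from the resource-usage data:

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.search_props.sid=*
| bin _time span=5m
| stats dc(data.search_props.sid) AS concurrent_searches BY _time host data.search_props.type data.search_props.provenance
| sort - _time

Comparing data.search_props.type (scheduled vs. ad-hoc) and data.search_props.provenance across hosts should show whether the extra load on that node is really scheduler-driven or comes from somewhere else, such as dashboards or the app's own internal searches.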


joshiro
Communicator

The search concurrency count on each SH node appears constant even though we are not running any real-time searches.
It seems that the scheduler in the SH cluster has some stuck processes that keep running even after a restart.

Any ideas on how to clean up stuck processes in the scheduler?
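One way to check for long-running or orphaned jobs behind the constant concurrency (a sketch; run it on the affected member, since splunk_server=local restricts the REST call to that node) is to list the jobs that are still running and sort by duration:

| rest /services/search/jobs splunk_server=local
| search isDone=0
| table sid label eai:acl.app eai:acl.owner dispatchState isRealTimeSearch isZombie runDuration
| sort - runDuration

Jobs with a very large runDuration, or with isZombie=1, can then be cancelled from Activity > Jobs (the Job Manager) or via the job's REST control endpoint, and the owning saved search reviewed for runaway behaviour.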
