Splunk Enterprise

How to troubleshoot a scheduler that constantly has a high number of concurrent searches running?

joshiro
Communicator

We are running an SHC on Splunk Enterprise 9.0.1 (on-prem) and noticed that the concurrent search count on one of the nodes is much higher than on the rest (approx. 3x), even though the scheduler delegation shows it is delegating evenly across the nodes.
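For context, a search along these lines should show the historical search concurrency per member over time, which is where the imbalance shows up (just a sketch, assuming _internal from every SHC member is searchable from one place and that a 5-minute span is fine):

index=_internal source=*metrics.log* group=search_concurrency "system total"
| timechart span=5m max(active_hist_searches) BY host

The active_hist_searches field comes from the search_concurrency group in metrics.log, so this only counts historical searches, not real-time ones.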

Most of the scheduled searches come from an app that runs dbx queries to keep some lookups updated. These are scheduled to run only a few times a week, but they appear to be running constantly in the scheduler.
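To make that concrete: is there a way to confirm how often the scheduler actually executes these searches on each member? Something like the sketch below is what we had in mind (assuming the usual scheduler.log fields savedsearch_name, app, status and run_time):

index=_internal sourcetype=scheduler
| stats count AS executions, avg(run_time) AS avg_run_time BY host, app, savedsearch_name, status
| sort - executions

If the dbx lookup searches show far more executions than their cron schedules allow, that would at least confirm where the load is coming from.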

These concurrent searches run constantly even after a restart of the node.

It doesn't happen on a standalone instance with the same apps, so we think it is a clustering issue.

How can we troubleshoot/debug this behaviour?
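For example, would checking the scheduler's skipped searches and their reasons point us in the right direction? A sketch (assuming the reason field is populated in scheduler.log for skipped runs):

index=_internal sourcetype=scheduler status=skipped
| stats count BY host, reason, savedsearch_name
| sort - count

If one member is constantly hitting its concurrent historical search limit, we would expect the skip reasons to say so.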


joshiro
Communicator

The search concurrency count on each SH node stays constant even though we are not running any real-time searches.
It seems the scheduler in the SH cluster has some stuck processes that keep running even after a restart.

Any ideas on how to clean stuck processes on the scheduler?
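In case it helps to clarify what we mean by stuck: listing the jobs that are still not done on the busy member is roughly what we were thinking of (a sketch; splunk_server=local means it has to be run on each member, and the one-hour runDuration cut-off is arbitrary):

| rest /services/search/jobs count=0 splunk_server=local
| search isDone=0 runDuration>3600
| table sid, label, author, dispatchState, runDuration, updated
| sort - runDuration

We know individual jobs can be finalized or deleted from the Jobs page (Activity > Jobs), but that does not look like a real fix if the scheduler keeps re-dispatching them.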
