Splunk Enterprise

How to troubleshoot a scheduler that has a high amount of concurrent searches running constantly?

joshiro
Communicator

We are running a SHC with Splunk Enterprise (on-prem) 9.0.1 and noticed that the number of concurrent searches on one of the nodes is much higher than on the rest (roughly 3 times), even though the scheduler delegation shows it is delegating evenly across the nodes.

Most of the scheduled searches come from an app that runs dbx queries to keep some lookups updated. These are scheduled to run only a few times a week, but they appear to be running constantly in the scheduler.

These concurrent searches run constantly even after a restart of the node.

This doesn't happen on a standalone instance with the same apps, so we think it is a clustering issue.

How can we troubleshoot/debug this behaviour?
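A possible starting point, assuming the stock _internal index and the default scheduler sourcetype are searchable from the SHC, is to break down scheduler activity per member and per saved search to see which searches are firing more often than their cron schedule suggests:

    index=_internal sourcetype=scheduler status=*
    | stats count AS executions avg(run_time) AS avg_run_time BY host app savedsearch_name status
    | sort - executions

To compare that against what is actually running right now, a REST query along these lines (field names such as provenance and dispatchState may vary slightly by version) lists in-flight jobs per member:

    | rest /services/search/jobs splunk_server=*
    | search dispatchState=RUNNING
    | stats count BY splunk_server provenance label

This is only a sketch for narrowing the problem down, not a confirmed fix.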


joshiro
Communicator

The search concurrency count on each SH node stays constant even though we are not running any real-time searches.
It seems that the scheduler in the SH cluster has some stuck processes that keep running constantly, even after a restart.

Any ideas on how to clean up stuck processes in the scheduler?
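One hedged way to find candidates for cleanup, assuming the search jobs REST endpoint is reachable from each member, is to list long-running scheduler-dispatched jobs and their sids:

    | rest /services/search/jobs splunk_server=local
    | search dispatchState=RUNNING provenance="scheduler"
    | eval runtime_min=round(runDuration/60,1)
    | table sid label eai:acl.app runtime_min
    | sort - runtime_min

A job surfaced this way can then be finalized from Activity > Jobs in the UI, or cancelled against its job control endpoint (action=cancel) on the management port; whether that addresses the underlying cause in an SHC is something the cluster logs would still need to confirm.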
