Deployment Architecture

The percentage of non high priority searches delayed

bsanjeeva
Explorer

Hi Splunkers,

I am getting the below error on a clustered search head:

"The percentage of non high priority searches delayed (88%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=1615. Total delayed Searches=1430"

This issue is seen particularly in one app whose index receives a large amount of data. How do I fix it?

Thanks in advance


richgalloway
SplunkTrust

Check the Scheduler Activity page of the Monitoring Console to try to determine why searches are delayed.  The most likely cause is too many searches trying to run at the same time.  Another likely cause is searches not completing before the next scheduled run-time.
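
If you'd rather look at this from the search bar, a sketch along these lines against the scheduler's internal logs should show which searches are being skipped or deferred and why (status and reason are the fields the scheduler sourcetype normally emits; set the time range to the last 24 hours):

index=_internal sourcetype=scheduler (status=skipped OR status=deferred)
| stats count by savedsearch_name, app, status, reason
| sort - count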

To fix the first cause, reschedule searches so they're spread out across the clock.  It's very common for most searches to try to run at :00 so focus on those first.
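
As a rough example, in savedsearches.conf (the stanza name here is only a placeholder for one of your reports), moving a search off the top of the hour is just a change to its cron expression:

[Example hourly report]
# was: cron_schedule = 0 * * * *
cron_schedule = 17 * * * *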

To fix the second cause you can make the search more efficient (faster) or reschedule it so it has time to complete before the next run-time.
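
For instance (again a sketch with a placeholder stanza name), a report scheduled every five minutes that regularly takes longer than that could run every fifteen minutes instead, and a schedule_window gives the scheduler some leeway on when to start it:

[Example frequent report]
# was: cron_schedule = */5 * * * *
cron_schedule = */15 * * * *
schedule_window = 10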

---
If this reply helps you, Karma would be appreciated.

bsanjeeva
Explorer

@richgalloway, thanks for the suggestions. Here are my observations:

In the MC's Scheduler Activity, I see that the issue is present on only one of the three search heads in the cluster. Many reports are skipped or deferred with the following reason:

"The maximum number of concurrent running jobs for this historical scheduled search on this cluster has been reached (4)"

Do I have to modify limits.conf?

Why is this issue seen on only one search head in the cluster?

Thanks


richgalloway
SplunkTrust

That is a symptom of the second cause of skipped searches.  The message is saying it is time to run a search, but an earlier invocation of the search is still running.

The fix is either 1) make the search complete sooner; or 2) schedule it to run less frequently.
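
To see how long the search actually takes compared to how often it runs, the scheduler logs record the runtime of each completed invocation. A sketch (the savedsearch_name value is a placeholder, and run_time is reported in seconds):

index=_internal sourcetype=scheduler status=success savedsearch_name="Example frequent report"
| stats count avg(run_time) AS avg_runtime_sec max(run_time) AS max_runtime_sec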

It only happens on one SH because scheduled searches only run on one search head.

---
If this reply helps you, Karma would be appreciated.

bsanjeeva
Explorer

Your explanation helps me understand the issue now. Can modifying limits.conf help in any way?

Current config on the search head:

[search]
dispatch_dir_warning_size=7000

Thanks


richgalloway
SplunkTrust

Changing limits.conf won't help in this case.  You just can't have more than one instance of the same scheduled search running at the same time.

---
If this reply helps you, Karma would be appreciated.