Getting Data In

Is there a way to track the amount of time searches are queued in Splunk?

athoma31
Explorer

Throughout the day, Splunk runs its internal processes and users run their queries. As the day hits its peak, searches sometimes queue up (I believe because the search head cluster's resources are fully consumed).

Is there a way to track how many searches queue throughout the day and for how long they remain queued until they execute (or are abandoned by the user)?

1 Solution

athoma31
Explorer

As a resolution to this question, I ended up using some of the saved searches crafted by gjanders in the comments section of the initial question.


gjanders
SplunkTrust

Glad I could help. Please accept your answer so everyone knows that the question is now answered.


pruthvikrishnap
Contributor

Hi Athoma,

You can track all the information related to this under index=_internal sourcetype="splunkd" group="searchscheduler". Maybe you can create some scheduled alerts to identify the total number of searches queued per day.
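A minimal sketch along those lines: beyond the searchscheduler metrics, the scheduler's own events in _internal are a related source. The assumption here is that they expose a status field with values such as deferred (the search was delayed) and skipped (it never ran), so verify the field names against your own environment first:

index=_internal sourcetype=scheduler (status=deferred OR status=skipped)
| timechart span=1d count BY status

Saving something like this as a daily scheduled alert would give a per-day count of searches that queued or were dropped.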


gjanders
SplunkTrust

Are you looking at ad-hoc or scheduled searches?

I have searches such as:
SearchHeadLevel - Splunk Users Violating the Search Quota

which looks for queueing of searches; you could modify it to determine how long a search was queued for (a rough sketch of that idea follows this post). I also have others, such as:
SearchHeadLevel - Users exceeding the disk quota
SearchHeadLevel - Users exceeding the disk quota introspection

AllSplunkEnterpriseLevel - Splunk Scheduler skipped searches and the reason
AllSplunkEnterpriseLevel - Splunk Scheduler excessive delays in executing search

Some of them might give you an idea of where to look. You will find them in my GitHub repository or in the app Alerts for Splunk Admins.
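As a rough sketch of the "how long was it queued" idea for scheduled searches (ad-hoc searches are not covered here): the scheduler events in _internal generally carry scheduled_time and dispatch_time epoch fields, so the gap between them approximates how long a search waited before it actually ran. The field names are an assumption to check against your own data:

index=_internal sourcetype=scheduler scheduled_time=* dispatch_time=*
| eval queue_seconds = dispatch_time - scheduled_time
| stats avg(queue_seconds) AS avg_queue_seconds max(queue_seconds) AS max_queue_seconds count BY savedsearch_name, user
| sort - max_queue_seconds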

athoma31
Explorer

Both scheduled and ad-hoc searches, preferably with the added ability to determine whether a search originated from a user or from the system.

I have looked into your GitHub and fiddled around with "SearchHeadLevel - Splunk Users Violating the Search Quota" on my end, and it helps to see a count of how many times a search queues up. I'll continue to look into it to see if there is a way to map it into a timechart.


gjanders
SplunkTrust

You could likely drop the bin statement and replace the section below it with a timechart of some kind. Most of these alerts were designed to be sent via email, which is why there is a list of fields at the end.
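For reference, the general shape of that change (the exact tail of the original alert may differ, so treat this as an assumption about its structure): a tail that buckets time and then splits a stats by user, such as

| bin _time span=15m
| stats count BY _time, user

can usually be replaced with a single timechart command that produces a chartable series per user:

| timechart span=15m count BY user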
