Throughout the day, Splunk runs its internal processes and users run their queries. As the day hits its peak, searches sometimes queue up (due to what I believe is resource exhaustion across the SHC).
Is there a way to track how many searches queue throughout the day, and how long they remain queued before they execute (or are abandoned by the user)?
Are you looking at ad-hoc or scheduled searches?
I have searches such as:
SearchHeadLevel - Splunk Users Violating the Search Quota
which looks for queueing of searches; you could modify it to determine how long a search was queued for (see the sketch after this list). I also have others, such as:
SearchHeadLevel - Users exceeding the disk quota
SearchHeadLevel - Users exceeding the disk quota introspection
AllSplunkEnterpriseLevel - Splunk Scheduler skipped searches and the reason
AllSplunkEnterpriseLevel - Splunk Scheduler excessive delays in executing search
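For context, here is a minimal sketch of the idea behind the quota alert. It assumes the quota warnings in splunkd.log contain the phrase "maximum number of concurrent" (check the exact wording on your Splunk version; you may also need a rex to pull the user name out of the message):

index=_internal sourcetype=splunkd "maximum number of concurrent"
| bin _time span=15m
| stats count AS queued_searches by _time

Counting in 15-minute bins makes it easy to spot whether queueing clusters around your daily peak.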
Both scheduled and ad-hoc searches, preferably with the added determination of whether the search originated from a user or the system.
I have looked into your GitHub and fiddled around with "SearchHeadLevel - Splunk Users Violating the Search Quota" on my end and it helps to see a count of how many times a search queues up. I'll continue to look into it to see if there is a way to map it into a timechart.
You could likely drop the bin statement and replace the section below it with a timechart of some kind. Most of these alerts were designed to be sent via email, which is why there is a list of fields shown at the end.
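For example, something along these lines (using the same hedged quota-warning string as the sketch above):

index=_internal sourcetype=splunkd "maximum number of concurrent"
| timechart span=15m count AS queued_searches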
You can track all the information related to this under index=_internal sourcetype="splunkd" group="searchscheduler". Maybe you can create some daily scheduled alerts to identify the total number of searches queued per day.
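As a rough sketch, assuming the searchscheduler metrics group on your version exposes counters such as dispatched and skipped (verify the exact field names in your own metrics.log), using skipped as a proxy for searches that never ran on time:

index=_internal sourcetype=splunkd group=searchscheduler
| timechart span=1h sum(dispatched) AS dispatched, sum(skipped) AS skipped

You could save that as a daily scheduled alert and trigger when skipped exceeds a threshold.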
As a resolution to this question, I ended up using some of the saved searches crafted by gjanders in the comments section of the initial question.
Glad I could help. Please accept your answer so everyone knows that the question is now answered.