Hello,
I want to find out which dashboards take a long time to load. So I would like a table that shows the runtimes/search times for all dashboards opened by any user.
It should look something like this:
time                 dashboard          user   runtime (in seconds)
2020-05-22 10:02:00  sample_dashboard   admin  15
2020-05-22 10:01:00  sample_dashboard   admin  20
2020-05-22 10:00:00  sample_dashboard2  user   5
I found two answers from 2016 and 2017 which do not work. (The first one returns empty and the second one lists searches instead of dashboards.)
https://answers.splunk.com/answers/425215/how-can-i-measure-the-dashboard-load-time.html
https://answers.splunk.com/answers/488539/how-to-write-a-search-to-find-out-the-average-dash.html
Can anybody help?
To answer my own question, I found that the REST endpoint /services/search/jobs contains a lot of useful information.
Running | rest /services/search/jobs lists all past Splunk searches (similar to the Activity > Jobs view). I found that a search from a dashboard has a "provenance" field like this: UI:Dashboard:sample_simple_dashboard. So I can extract the name of the dashboard for all dashboard searches.
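As a quick way to check just that extraction on its own, here is a minimal sketch using only fields that also appear in the full search below; it lists the extracted dashboard name next to the raw provenance value:
| rest /services/search/jobs
| rex field=provenance "UI:Dashboard:(?<Dashboard>.+)"
| search Dashboard=*
| table Dashboard provenance title author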
One difficulty in determining the runtime of a dashboard is that there is no clear endtime. A user can select various inputs and time windows on a dashboard which will continuously trigger new searches.
Luckily, there is the "published" field, which records when a search was triggered. So I can group dashboard searches together based on the "published" field.
This is the search I created:
| rest /services/search/jobs
| rename dispatchState as Status eai:acl.app as App title as Search author as User runDuration as Runtime published as Published id as ID provenance as Provenance
| rex field=Provenance "UI:Dashboard:(?<Dashboard>.+)" | search Dashboard=*
| rex field=ID "(?<JobId>[^/]*)$"
| table App,Dashboard,User,Published,Runtime,Status
| stats sum(Runtime) as Runtime count as searches values(Status) as Status by App,Dashboard,User,Published
| eval Status=mvjoin(mvsort(mvdedup(split(mvjoin(Status,","),","))),",")
| eval Runtime=round(Runtime,1)
| sort 0 -Published
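If you want output closer to the table in the question (one row per dashboard load with time, dashboard, user and runtime in seconds), a possible variant of the same search, keeping Published as the load time, could look like this (an untested sketch built from the same fields):
| rest /services/search/jobs
| rename eai:acl.app as App author as User runDuration as Runtime published as Published provenance as Provenance
| rex field=Provenance "UI:Dashboard:(?<Dashboard>.+)" | search Dashboard=*
| stats sum(Runtime) as Runtime by App,Dashboard,User,Published
| eval Runtime=round(Runtime,0)
| sort 0 -Published
| table Published,Dashboard,User,Runtime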
To add, I'm not sure there would be a significant difference if you didn't use the runtimes of the searches themselves.
Check this out also:
https://answers.splunk.com/answers/235005/including-search-run-time-in-search-results.html
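As one possible way to include a search's own run time in its results (a sketch only; this is not necessarily what the linked answer does, and the index/sourcetype below are just an illustration), you can use addinfo, which adds info_search_time (the time the search was dispatched), and compare it against the wall-clock time() at the end of the pipeline:
index=_internal sourcetype=splunkd earliest=-15m
| stats count
| addinfo
| eval runtime_seconds = round(time() - info_search_time, 1)
| table count runtime_seconds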
Those two answers were created using older versions of Splunk and will no doubt require tweaking to work with your version.
Dashboards are just collections of searches, so having a list of searches is not all bad.
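If a plain list of recent searches with their run times is already enough, a stripped-down version of the search above (same REST endpoint and fields) gives exactly that:
| rest /services/search/jobs
| table published title author provenance runDuration dispatchState
| sort 0 -published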