Is there a way to detect subsearch limits being exceeded in scheduled searches?

I notice that you can get this info from REST:

| rest splunk_server=local /servicesNS/$user$/$app$/search/jobs/$search_id$
| where isnotnull('messages.error')
| fields id, savedsearch_name, app, user, executed_at, search, messages.*

And you can more or less join this to the _audit data:

index=_audit action=search (has_error_warn=true OR fully_completed_search=false OR info="bad_request")
| eval savedsearch_name = if(savedsearch_name="", "Ad-hoc", savedsearch_name)
| eval search_id = trim(search_id, "'")
| eval search = mvindex(search, 0)
| map search="| rest splunk_server=local /servicesNS/$user$/$app$/search/jobs/$search_id$
| where isnotnull('messages.error')
| fields id, savedsearch_name, app, user, executed_at, search, messages.*"

But it doesn't really work: I get lots of REST failures reported and the output is poor. You also need to run it while the search artifacts are still present, although my plan was to run this frequently and push the results to a summary index.

Has anyone had better success with this? One thought would be to ingest the data returned by the REST call (I presume from var/run/dispatch). Or might debug-level logging help?
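For reference, the combined pipeline I have in mind looks roughly like this. One thing worth noting: `map` defaults to `maxsearches=10`, which by itself could account for some of the failures if the _audit search returns more rows than that. The `maxsearches=100` value and the `search_errors_summary` index name below are placeholders I've made up, not anything from my environment:

```
index=_audit action=search (has_error_warn=true OR fully_completed_search=false OR info="bad_request")
| eval savedsearch_name = if(savedsearch_name="", "Ad-hoc", savedsearch_name)
| eval search_id = trim(search_id, "'")
| eval search = mvindex(search, 0)
| map maxsearches=100 search="| rest splunk_server=local /servicesNS/$user$/$app$/search/jobs/$search_id$
    | where isnotnull('messages.error')
    | fields id, savedsearch_name, app, user, executed_at, search, messages.*"
| collect index=search_errors_summary
```

The `collect` at the end is the summary-index push I mentioned; it only makes sense once the REST failures for expired artifacts are dealt with, since `map` will still error on any search_id whose dispatch directory has already been reaped.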