Splunk Enterprise

Find failed job reason

tmontney
Builder

I want to see any failed job, both ad-hoc and scheduled. For instance, while I was creating a new search command, it failed many times until I got it right. I expected the same error that Splunk Web shows me to appear in the logs:

  • | rest /servicesNS/-/-/search/jobs shows only a handful of jobs over 4 hours; there were far more than that.
  • _audit shows plenty of failed searches, but not the reason
  • _internal doesn't show anything useful
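For the _audit angle, failed searches can at least be enumerated there even if the reason is missing. A minimal sketch, assuming the default audit.log field names (info=failed marks searches that did not complete):

  index=_audit action=search info=failed
  | table _time user search_id search

The search_id from these events can then be fed into the jobs REST endpoint to look up the error details.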

codebuilder
Influencer

Use the Job Inspector.

https://docs.splunk.com/Documentation/Splunk/8.2.1/Search/ViewsearchjobpropertieswiththeJobInspector
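If you already have a job's SID, the same properties the Job Inspector displays are also exposed over REST. A sketch, with <sid> as a placeholder you substitute for a real search ID:

  | rest /servicesNS/-/-/search/jobs/<sid>
  | table sid, isFailed, messages.error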

----
An upvote would be appreciated and Accept Solution if it helps!

tmontney
Builder

After posting, I ended up with this:

| rest /servicesNS/-/-/search/jobs
| eval t=strptime(updated, "%Y-%m-%dT%H:%M:%S.%f-%z")
| where 
    [ makeresults 
    | addinfo
    | eval search=" t >= ". info_min_time. if(info_max_time=="+Infinity",""," AND t <= ".info_max_time) ]
| eval hadError=case(isFailed==1, 1, isnotnull('messages.error'), 1, true(), 0)
| search hadError=1
| stats count
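To surface the reason itself rather than just a count, the final stats count can be swapped for a table over fields the jobs endpoint already returns (messages.error carries the error text when a job fails):

  | search hadError=1
  | table sid, title, updated, messages.error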