Hello Splunk Gurus-
We have noticed that a Splunk job (version 6.6.3) does not end gracefully if the post-pipe commands encounter missing fields, specifically in the map command.
example:
index="_internal" sourcetype="scheduler" status=skipped savedsearch_name!="_ACCELERATE*" "Your maximum number of concurrent searches has been reached."
| eval time=strftime(_time, "%H:%M:%S")
| stats count, values(time) as times, values(reason) as reasons by user, savedsearch_id, savedsearch_name
| map search="localop | rest /services/authentication/users/$user$ | fields email | eval user=$user$, savedsearch=$savedsearch_name$, count=$count$, times=$times$, reasons=$reasons$, ssid=$savedsearch_id$"
| rex field=ssid ";(?<app>[^;]+)"
| fields app, savedsearch, reasons, count, times, email
The map command expects the user variable to be populated with data. If the specific "Your maximum number..." message is not present, the user field is not present either. Instead of just completing with "No results found", the job throws a nasty error and shows as a failure. Of course, we are searching and alerting on failed searches, so this gets annoying.
Is there an eval to NULL we could perform to prevent this from failing and simply present "No results found"?
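For reference, this is the kind of guard I was imagining, sketched against the search above (untested on 6.6.3; the where clause is just my guess at a fix):

```
index="_internal" sourcetype="scheduler" status=skipped savedsearch_name!="_ACCELERATE*" "Your maximum number of concurrent searches has been reached."
| eval time=strftime(_time, "%H:%M:%S")
| stats count, values(time) as times, values(reason) as reasons by user, savedsearch_id, savedsearch_name
| where isnotnull(user)
| map search="localop | rest /services/authentication/users/$user$ | fields email | eval user=$user$, savedsearch=$savedsearch_name$, count=$count$, times=$times$, reasons=$reasons$, ssid=$savedsearch_id$"
```

The idea is that where drops any row without a user field before map runs, so map would iterate zero times rather than substituting an empty $user$ token. I haven't verified whether that actually avoids the failure status, or whether map errors out even with zero input rows.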
On a larger scale, is it possible, in general, for Splunk to have some pre- or post-pipe logic to stop the job if no results remain before a large amount of processing is done, thus saving CPU and search head resources?
Thanks,
Mike