I have a search that writes a lookup file at the end. I also have searches that end in a collect command. And there are other things I would like to do that cause side effects. What I am looking for is a way to abort a search before it reaches the commands with side effects.
For example,
index=abc ...
...
| abort condition=[count < 50]
...
| collect index=summary
| outputlookup abc.csv
| my_custom_command_to_post_to_twitter
| etc...
I could probably do it all by wrapping the latter half in a map command, but I am looking for an easier way.
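For reference, the map wrapping I have in mind would look roughly like this (just a sketch; the threshold and the wrapped commands are placeholders from the example above):

index=abc ...
| stats count
| where count >= 50
| map maxsearches=1 search="search index=abc ... | collect index=summary | outputlookup abc.csv"

If the where clause drops the row, map never runs anything, so none of the side-effect commands execute. But it means repeating the base search inside the map, which is why I'm hoping for something simpler.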
You could make use of the override_if_empty attribute of outputlookup: discard all events when an error condition occurs, and outputlookup will then leave the existing file untouched. Maybe something like:

| dbxquery connection=xxx query="select * from tablex"
| ...massage the data...
| eventstats count as dbresults
| where dbresults > 10000
| outputlookup tablex_fast_lookup override_if_empty=false

If the query comes back with suspiciously few rows, the where clause drops every event, and override_if_empty=false keeps outputlookup from overwriting the lookup with an empty result.
How are you running the search, scheduled or ad-hoc?
I'm running many scheduled searches. In SQL, there is the "on error" capability that lets you avoid taking further action if something broke earlier.
Many of our searches do something like the following:
| dbxquery connection=xxx query="select * from tablex"
| ...massage the data...
| outputlookup tablex_fast_lookup
If the SQL command dies due to a SQL issue, then we end up writing an empty lookup file. Once the lookup file is empty, dozens of other things break. I suppose I could convert it to a KV store, merge data into the lookup with a last-update timestamp, and then purge older data based on the timestamp (rough sketch below).
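The KV store route would be something like this (a sketch only; tablex_kvstore is a hypothetical KV-store-backed lookup and id is a stand-in key field):

| dbxquery connection=xxx query="select * from tablex"
| ...massage the data...
| eval last_updated=now()
| outputlookup append=true key_field=id tablex_kvstore

with a second scheduled search to purge rows that have not been refreshed recently:

| inputlookup tablex_kvstore
| where last_updated >= relative_time(now(), "-24h")
| outputlookup tablex_kvstore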
I figured that if I could abort the search, then the lookup file would have stale data, but it would not be empty.
For just this particular problem (an empty lookup when there is no data), you could do something like this (updated):
| dbxquery connection=xxx query="select * from tablex"
| ...massage the data...
| eval type="new" | append [ | inputlookup tablex_fast_lookup | eval type="old" ]
| eventstats count(eval(type="new")) as hasData
| eval filter=if(hasData>0,"new","old")
| where type=filter
| fields - type hasData filter
| outputlookup tablex_fast_lookup
Basically, if base search has no data, re-add the existing data from lookup again.
Taking it one step further, I made a global macro:
[outputlookup_if_data(1)]
args = lookup_file
definition = eval upx_type="new" | append [ | inputlookup $lookup_file$ | eval upx_type="old" ] | eventstats count(eval(upx_type="new")) as upx_hasData | eval upx_filter=if(upx_hasData>0,"new","old") | where upx_type=upx_filter | fields - upx_type upx_hasData upx_filter | outputlookup $lookup_file$
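A search like the one above then just ends with the macro call (sketch, reusing the earlier hypothetical pipeline):

| dbxquery connection=xxx query="select * from tablex"
| ...massage the data...
| `outputlookup_if_data(tablex_fast_lookup)`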
For the append, I think you mean "inputlookup". I know what you mean, and this works well for that case.
-Thanks
This explains how to do it with map, but you can just as easily do it with a subsearch:
https://answers.splunk.com/answers/172541/is-it-possible-to-purposely-cause-a-scheduled-sear.html
Also, instead of having the search cause an error with bogus time values, you could just as easily replace the entire search with | noop.
I'd rather not do it with a map, because some of my finishing commands already leverage map. For the "Accept": how would you do it with a subsearch?
Or you can do it with a subsearch, like this:

YOUR BASE SEARCH HERE [| noop
| stats count AS blackout
| addinfo
| eval blackout=case((SomeLogic="For Blackout Here"), "YES",
                     (OtherLogic="For Blackout Here"), "YES",
                     true(), "NO")
| eval earliestMaybe=if((blackout=="NO"), info_min_time, now())
| eval latestMaybe=if((blackout=="NO"), info_max_time, 0)
| eval search="earliest=" . earliestMaybe . " latest=" . latestMaybe]
| YOUR POST SEARCH HERE
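To spell out how this works: addinfo exposes the search's scheduled time range as info_min_time and info_max_time, and the subsearch hands back a search string. Outside a blackout it just re-applies that range; during a blackout it emits an impossible range (earliest=now(), latest=0), so the base search finds nothing (or errors out on the bogus range) and the side-effect commands in the post-search never see any events.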