Splunk Search

How do I abort a search based on a condition?

reed_kelly
Contributor

I have a search that writes a lookup file at the end. I also have searches that end in a collect command. And there are other things that I would like to do that cause side-effects. What I am looking for is a way to abort a search before getting to the commands with side effects.

For example,

index=abc ...
...
|abort condition=[count < 50]
...
|collect index=summary
|outputlookup abc.csv
|my_custom_command_to_post_to_twitter
|etc...

I could probably do it all by wrapping the latter half in a map command, but I am looking for an easier way.
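For concreteness, the map wrapper I have in mind might look roughly like this (an untested sketch; the threshold and the side-effect commands are placeholders). The base search is reduced to a single row, where drops that row when the condition fails, and map then never runs the side-effecting search:

```
index=abc ...
| stats count
| where count >= 50
| map maxsearches=1 search="search index=abc ... | collect index=summary | outputlookup abc.csv"
```

Since map runs its quoted search once per surviving input row, filtering out the row skips the side effects entirely.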

1 Solution

woodcock
Esteemed Legend

This explains how to do it with map but you can just as easily do it with a subsearch:

https://answers.splunk.com/answers/172541/is-it-possible-to-purposely-cause-a-scheduled-sear.html

Also, instead of having the search cause an error with bogus time values, you could just as easily replace the entire search with |noop.

mmol
Explorer

You could make use of the override_if_empty attribute of outputlookup:

 | dbxquery connection=xxx query="select * from tablex"
 |massage the data
 |discard all events when an error condition occurs, maybe using:
 | eventstats count as dbresults
 | where dbresults > 10000 
 | outputlookup tablex_fast_lookup override_if_empty=false

somesoni2
Revered Legend

How are you running the search, scheduled or ad-hoc?

reed_kelly
Contributor

I'm running many scheduled searches. In SQL, there is an "on error" capability that lets you avoid taking further action if something broke earlier.

Many of our searches do something like the following:

|dbxquery connection=xxx query="select * from tablex"
|massage the data
|outputlookup tablex_fast_lookup

If the SQL command dies due to a SQL issue, then we end up writing an empty lookup file. Once the lookup file is empty, dozens of other things break. I suppose I could convert it to a kv-store and merge data to the lookup with a last update timestamp. Then we could purge older data based on the timestamp.

I figured that if I could abort the command, then the lookup file would have stale data, but it would not be empty.
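The KV store idea above could look roughly like this (an untested sketch, assuming a KV store collection backing a lookup named tablex_kv; the name and retention window are placeholders). New rows are stamped and appended rather than replacing the lookup:

```
| dbxquery connection=xxx query="select * from tablex"
| eval last_updated=now()
| outputlookup append=true tablex_kv
```

with a separate scheduled search purging stale rows by keeping only the recent ones:

```
| inputlookup tablex_kv
| where last_updated >= relative_time(now(), "-7d")
| outputlookup tablex_kv
```

If the dbxquery dies, nothing is appended and the collection keeps its last good data instead of being emptied.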

somesoni2
Revered Legend

For just this particular problem (empty lookup when no data), you could do something like this
Updated

 |dbxquery connection=xxx query="select * from tablex"
 |massage the data
 | eval type="new"  | append [ | inputlookup tablex_fast_lookup | eval type="old"]
 | eventstats count(eval(type="new")) as hasData | eval filter=if(hasData>0,"new","old") 
 | where type=filter | fields - type hasData filter
 |outputlookup tablex_fast_lookup

Basically, if the base search returns no data, the existing lookup contents are written back unchanged.

reed_kelly
Contributor

Taking it one step further, I made a global macro:

 [outputlookup_if_data(1)]
 args = lookup_file
 definition = eval upx_type="new" | append [ | inputlookup $lookup_file$ | eval upx_type="old"] | eventstats count(eval(upx_type="new")) as upx_hasData | eval upx_filter=if(upx_hasData>0,"new","old") | where upx_type=upx_filter | fields - upx_type upx_hasData upx_filter | outputlookup $lookup_file$

reed_kelly
Contributor

For the append, I think you mean "inputlookup". I know what you mean and this works well for that case.
-Thanks

reed_kelly
Contributor

I'd rather not do it with a map, because some of my finishing commands already leverage map. Regarding the accepted answer: how would you do it with a subsearch?

woodcock
Esteemed Legend

Or you can do it with a subsearch, like this:

YOUR BASE SEARCH HERE [| noop 
| stats count AS blackout 
| addinfo 
| eval blackout=case((SomeLogic="For Blackout Here"), "YES",
    (OtherLogic="For Blackout Here"), "YES",
    true(),"NO") 
| eval earliestMaybe=if((blackout=="NO"), info_min_time, now()) 
| eval latestMaybe=if((blackout=="NO"), info_max_time, 0) 
| eval search="earliest=" . earliestMaybe . " latest=" . latestMaybe]
| YOUR POST SEARCH HERE