
Is there a way I can ensure the rest of the search does not run when the dbxquery fails in a subsearch?

briancronrath
Contributor

I've been running into issues with a saved search that joins on data from a subsearch using the dbxquery command and periodically writes a lookup file for us. We hit intermittent database connection problems where search heads could not reach the DB; when that happened, the search continued to run even though the dbxquery failed, which essentially destroyed our lookup file. Is there a way I can ensure the rest of the search does not run when the dbxquery fails in a subsearch?


DalJeanis
SplunkTrust

In essence, you'll need to put another step in there.

Put the dbxquery results into a staging file, then only write the results from staging into the lookup if there is something in the staging file.

| inputlookup myrealfile.csv
| eval filenum=0
| inputlookup append=t mystagingfile.csv 
| eval filenum=coalesce(filenum,1)
| eventstats max(filenum) as maxfile
| where filenum=maxfile  
| fields - filenum maxfile
| outputlookup append=f myrealfile.csv
| appendpipe 
    [| where false() 
     | outputlookup append=f mystagingfile.csv 
     ]

...or, if you prefer this style...

| inputlookup mystagingfile.csv 
| appendpipe 
    [ | stats count as newcount 
      | eval rectype=if(newcount>0,"keepme","killme") 
      | inputlookup append=true myrealfile.csv 
      | eventstats max(rectype) as rectype
      | where isnull(newcount) AND rectype="keepme"
      | fields - rectype
    ]
| outputlookup append=f myrealfile.csv
| appendpipe 
    [| where false() 
     | outputlookup append=f mystagingfile.csv 
     ]
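For completeness, the staging file itself would be written by the dbxquery saved search rather than directly into the real lookup. A minimal sketch of that saved search (the connection name and SQL here are placeholders, not from the original post):

```
| dbxquery connection=my_db_connection query="SELECT id, name FROM my_table"
| outputlookup append=f mystagingfile.csv
```

If the database connection fails, dbxquery returns no rows, so the staging file ends up empty and the merge search above leaves the real lookup file untouched instead of clobbering it.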

