Is there a way I can ensure the rest of the search does not run when the dbxquery fails in a subsearch?

briancronrath
Contributor

I've been running into issues with a saved search that joins on data from a subsearch using the dbxquery command and periodically writes a lookup file for us. We hit some DB connection problems where search heads were intermittently unable to connect to the database; when that happened, the search continued to run even though the dbxquery failed, and the empty results essentially destroyed our lookup file. Is there a way I can ensure the rest of the search does not run when the dbxquery in a subsearch fails?
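
Roughly, the saved search looks like this (the connection, query, field, and lookup names here are made up for illustration):

index=app_events earliest=-24h
| stats count by user_id
| join type=left user_id
    [| dbxquery connection="my_db_connection" query="SELECT user_id, department FROM users"]
| outputlookup append=f user_departments.csv

When the dbxquery in the subsearch fails, the join contributes nothing, but the outputlookup still runs and overwrites the lookup with the incomplete results.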


DalJeanis
Legend

In essence, you'll need to put another step in there.

Put the dbxquery results into a staging file, then only write the results from staging into the lookup if there is something in the staging file.
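
For example, have the search that talks to the database write to the staging file instead of the real lookup. The connection and query below are placeholders; shape this part however your saved search already looks:

| dbxquery connection="my_db_connection" query="SELECT user_id, department FROM users"
| outputlookup append=f mystagingfile.csv

Then a second scheduled search promotes the staging results into the real lookup only when there is something to promote: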

| inputlookup myrealfile.csv
| eval filenum=0
| inputlookup append=t mystagingfile.csv 
| eval filenum=coalesce(filenum,1)
| eventstats max(filenum) as maxfile
| where filenum=maxfile  
| fields - filenum maxfile
| outputlookup append=f myrealfile.csv
| appendpipe 
    [| where false() 
     | outputlookup append=f mystagingfile.csv 
     ]
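
The filenum trick is what makes this safe: the existing lookup records get filenum=0 and the staging records get filenum=1, so max(filenum) is 1 only when the staging file actually contains records. The where clause then keeps either the staging records (replacing the lookup contents) or the existing records (leaving the lookup as it was). The trailing appendpipe with where false() writes an empty result set over the staging file so the next run starts clean.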

...or, if you prefer this style...

| inputlookup mystagingfile.csv 
| appendpipe 
    [ | stats count as newcount 
      | eval rectype=if(newcount>0,"killme","keepme") 
      | inputlookup append=true myrealfile.csv 
      | eventstats max(rectype) as rectype
      | where isnull(newcount) AND rectype="keepme"
      | fields - rectype
    ]
| outputlookup append=f myrealfile.csv
| appendpipe 
    [| where false() 
     | outputlookup append=f mystagingfile.csv 
     ]
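
Same safety net, different mechanics: the stats row counts the staging records, and eventstats spreads its keepme/killme verdict onto the appended lookup records. When the staging file is empty, the existing records are tagged keepme and get rewritten unchanged; when staging has records, the existing records are tagged killme and only the staging records survive to the outputlookup. The stats row itself always fails the isnull(newcount) test, and the final appendpipe clears the staging file as before.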
