Splunk Search

How to search a log if a match hasn't been seen in 24 hours?

weddi_eddy
Explorer

I currently have a lookup that contains two columns, MY_Hostname and Location. I can use the following search to look for "squirrel" across all hostnames in this lookup:

"squirrel" [| inputlookup mylookup.csv | fields MY_Hostname | rename MY_Hostname as host]

What I would like to do is set up an alert where, for each hostname in MY_Hostname, Splunk looks for "squirrel". If the number of results for a host is 0 over a 24-hour period (meaning the squirrel log was not created), I would like an email sent out naming that hostname.

I know I can set it up with all hostnames from the lookup, but the issue I see is that if hostname_1 has "squirrel" and hostname_4 does not, the overall result count will still be greater than 0, so the alert never fires for hostname_4.

Effectively, I want to know when the application is not running and which host it is not running on. The application generates "squirrel" at least once in any 24-hour period. (If you don't like squirrels, you can insert your animal of choice here.)

1 Solution

bowesmana
SplunkTrust

Proving a negative is a common question here. The basic solution is to do this:

"squirrel" [| inputlookup mylookup.csv | fields MY_Hostname | rename MY_Hostname as host]
| stats count by host
| append [
  | inputlookup mylookup.csv 
  | fields MY_Hostname 
  | rename MY_Hostname as host
  | eval count=0
]
| stats max(count) as count by host
| where count=0

So you're just counting the hosts that DO have data, appending every host from the lookup with a count of 0, then aggregating by host and keeping only the hosts whose maximum count is 0 - that is, filtering out the hosts that did have data.
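
To wire this into the 24-hour email alert: schedule the search over the last 24 hours and trigger when the number of results is greater than 0, since each row it returns is a host that never logged "squirrel". Below is a minimal savedsearches.conf sketch of that setup; the stanza name, cron schedule and recipient are placeholders, and you can configure the same thing through the alert UI instead.

# Sketch only -- stanza name, schedule and email recipient are placeholders
[Squirrel missing on host]
enableSched = 1
cron_schedule = 0 6 * * *
dispatch.earliest_time = -24h
dispatch.latest_time = now
search = "squirrel" [| inputlookup mylookup.csv | fields MY_Hostname | rename MY_Hostname as host] \
  | stats count by host \
  | append [| inputlookup mylookup.csv | fields MY_Hostname | rename MY_Hostname as host | eval count=0] \
  | stats max(count) as count by host \
  | where count=0
# Fire when the search returns at least one silent host
counttype = number of events
relation = greater than
quantity = 0
# Email the list of silent hosts
action.email = 1
action.email.to = you@example.com
action.email.sendresults = 1

In the UI this corresponds to a scheduled alert with time range Last 24 hours, trigger condition "Number of Results" greater than 0, and the email action set to include the results so the hostnames appear in the message.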

 


weddi_eddy
Explorer

Thanks so much! Had a few typos at first but this worked as intended!
