Alert query for known hosts that haven't received a specific event in the last 5 mins

guywood13
Path Finder

Hi, I'm after a query I can use in an alert that shows if one of my hosts hasn't logged a particular message in the last 5 minutes. I have 4 known hosts and, ideally, I wouldn't want a separate query/alert for each.

index="index" source="/var/log/log.log" "My Specific Message" earliest=-5m latest=now
| stats count by host


So this gives me a count of that specific event for each of my hosts. I want to know if one (or more) of these drops to zero in the last 5 minutes. All the hostnames are known, so they can be written into the query.

I haven't really got close with this one, so some help would be appreciated. Thanks!

1 Solution

gcusello
SplunkTrust

Hi @guywood13,

as @richgalloway and @codebuilder said, this takes a simple search, of a kind that has come up many times in the Community.

If you have only 4 hosts you can run something like this:

index="index" source="/var/log/log.log" "My Specific Message" earliest=-5m latest=now
| stats count by host
| append [ | makeresults | eval host="host1", count=0 | fields host count ]
| append [ | makeresults | eval host="host2", count=0 | fields host count ]
| append [ | makeresults | eval host="host3", count=0 | fields host count ]
| append [ | makeresults | eval host="host4", count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

If instead you have more hosts, create a lookup (e.g. perimeter.csv) containing only the hostnames of the hosts to monitor, then run a search like this:

index="index" source="/var/log/log.log" "My Specific Message" earliest=-5m latest=now
| eval host=lower(host)
| stats count by host
| append [ | inputlookup perimeter.csv | eval host=lower(host), count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0
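
As a side note, if you prefer not to repeat one append per host, the four known hostnames can be injected in a single subsearch. This is just a minimal sketch of the same approach (the hostnames are placeholders for your real ones):

index="index" source="/var/log/log.log" "My Specific Message" earliest=-5m latest=now
| stats count by host
| append [ | makeresults | eval host=split("host1,host2,host3,host4", ",") | mvexpand host | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0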

Ciao.

Giuseppe


guywood13
Path Finder

Hello @gcusello, this is exactly what I needed!  Grazie 🙂


bray1111
Explorer

As mentioned, save the search as an alert; with the search above, the trigger condition would be "number of results greater than 0", since any row returned represents a host whose count dropped to zero.

A word of caution about monitoring for negatives or low thresholds: if your data pipelines get backed up, a scheduled search looking for a negative will see little or no data at search time because of the slow pipeline. This can drive you crazy, because the pipelines will catch up and you'll be left wondering why Splunk is "fibbing" to you. The truth is that at the moment the scheduled search ran, the alert was valid from the search's perspective; looking at it after the fact, all the data will have been filled in. Caveat emptor.
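
One common mitigation, shown here only as a sketch (it is not from this thread), is to shift the alert window back so events have time to be indexed before you look for them, e.g. examining the 5-minute window that ended 5 minutes ago:

index="index" source="/var/log/log.log" "My Specific Message" earliest=-10m latest=-5m
| stats count by host

The alert then reports on data that is 5 to 10 minutes old, trading a little latency for fewer false alarms caused by indexing lag.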


richgalloway
SplunkTrust

Finding something that is not there is not Splunk's strong suit.  See this blog entry for a good write-up on it.

https://www.duanewaddle.com/proving-a-negative/
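
For completeness, a lightweight alternative for spotting silent hosts is the metadata command, which reads index metadata instead of searching raw events. A minimal sketch; note that it checks whether each host has sent any event recently, not the specific message, and only covers hosts already seen in the index:

| metadata type=hosts index="index"
| eval minutes_silent = round((now() - recentTime) / 60)
| where minutes_silent > 5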


codebuilder
SplunkTrust

There are many ways to do this, but the quick and easy method is to run your search, then click "Save As" in the top right and choose Alert. From there you can give the alert a name, set its schedule, and configure trigger actions such as email.
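
If you manage alerts in configuration files rather than the UI, the same alert could be expressed as a savedsearches.conf stanza along these lines. This is a minimal sketch: the stanza name, schedule, and email address are placeholders, and the search is the accepted lookup-based answer collapsed onto one line.

[Missing Specific Message Alert]
# Placeholder search: the lookup variant from the accepted answer; adjust index/source/lookup as needed
search = index="index" source="/var/log/log.log" "My Specific Message" | eval host=lower(host) | stats count by host | append [ | inputlookup perimeter.csv | eval host=lower(host), count=0 | fields host count ] | stats sum(count) AS total BY host | where total=0
dispatch.earliest_time = -5m
dispatch.latest_time = now
enableSched = 1
cron_schedule = */5 * * * *
# Trigger when any silent-host row is returned
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0
# Example action; the address is a placeholder
action.email = 1
action.email.to = you@example.com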
