Alert query for known hosts that haven't received a specific event in the last 5 mins

guywood13
Path Finder

Hi, I'm after a query I can alert on which shows whether one of my hosts hasn't logged a particular message in the last 5 minutes.  I have 4 known hosts and, ideally, wouldn't want a separate query/alert for each.

index="index" source="/var/log/log.log" "My Specific Message" earliest=-5m latest=now
| stats count by host

So this gives me a count of that specific event for each of my hosts.  I want to know if one (or more) of these drops to zero in the last 5 mins.  All the hostnames are known so can be written into the query.

Not really got close with this one so some help would be appreciated.  Thanks!

1 Solution

gcusello
SplunkTrust

Hi @guywood13,

As @richgalloway and @codebuilder said, this needs a simple search, variations of which you can find many times in the Community.

If you have only 4 hosts you can run something like this:

index="index" source="/var/log/log.log" "My Specific Message" earliest=-5m latest=now
| stats count by host
| append [ | makeresults | eval host="host1", count=0 | fields host count ]
| append [ | makeresults | eval host="host2", count=0 | fields host count ]
| append [ | makeresults | eval host="host3", count=0 | fields host count ]
| append [ | makeresults | eval host="host4", count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0
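
If you'd rather not repeat an append per host, the four subsearches can be collapsed into one. This is a sketch under the same assumptions (the host names are placeholders for your real ones):

```
index="index" source="/var/log/log.log" "My Specific Message" earliest=-5m latest=now
| stats count by host
| append
    [ | makeresults
      | eval host=split("host1,host2,host3,host4", ",")
      | mvexpand host
      | eval count=0
      | fields host count ]
| stats sum(count) AS total BY host
| where total=0
```

The split/mvexpand subsearch fabricates one zero-count row per known host, so any host missing from the real events still surfaces with total=0.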

If instead you have more hosts, you should create a lookup (called e.g. perimeter.csv) containing only the hostnames of the hosts to monitor, and then run a search like this:

index="index" source="/var/log/log.log" "My Specific Message" earliest=-5m latest=now
| eval host=lower(host)
| stats count by host
| append [ | inputlookup perimeter.csv | eval host=lower(host), count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0
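
For reference, perimeter.csv only needs a single host column; keeping the values lowercase matches the eval host=lower(host) in the search above (the host names here are placeholders):

```
host
host1
host2
host3
host4
```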

Ciao.

Giuseppe


guywood13
Path Finder

Hello @gcusello, this is exactly what I needed!  Grazie 🙂


bray1111
Explorer

As mentioned, save the search as an alert, and a trigger threshold of fewer than 1 result returned would fire the alert.

A word of caution about monitoring for negatives or low thresholds.  If your data pipelines get backed up, a scheduled search looking for negatives will see little or no data at search time because of the slow pipeline.  This can drive you crazy: the pipelines eventually catch up, and you'll be left wondering why Splunk is "fibbing" to you.  The truth is that at the moment the scheduled search ran, the alert was valid from the search's perspective; looking at it after the fact, all the data will have been filled in.  Caveat emptor.
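
One common mitigation for this (a sketch, not the only approach) is to schedule the alert over a slightly delayed window, so indexing lag has time to settle before the search looks at the data. For example, searching 10 to 5 minutes ago instead of the last 5 minutes:

```
index="index" source="/var/log/log.log" "My Specific Message" earliest=-10m@m latest=-5m@m
| stats count by host
```

The @m snaps each boundary to the minute, so consecutive 5-minute runs tile the timeline without gaps or overlaps. The trade-off is that the alert fires up to 5 minutes later.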


richgalloway
SplunkTrust

Finding something that is not there is not Splunk's strong suit.  See this blog entry for a good write-up on it.

https://www.duanewaddle.com/proving-a-negative/


codebuilder
Influencer

There are many ways to do this, but the quick and easy method is to simply run your search, then in the top right click on "Save As" and choose Alert. From there you can give the alert a name, set scheduling, and trigger actions such as email, etc.
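
If you prefer configuration files over the UI, a saved alert ends up as a stanza in savedsearches.conf along these lines. This is a rough sketch: the stanza name, schedule, and email address are placeholders, and you'd paste in the full alert search from the accepted answer:

```
[Missing Host Heartbeat]
enableSched = 1
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now
# Fire when the search returns any rows (each row is a silent host)
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = you@example.com
# search = <the full alert query from the accepted answer>
```

The UI "Save As > Alert" flow writes an equivalent stanza for you, so this is only needed if you manage configuration by deployment.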
