Splunk Search

How to create alarms when count = 0?

ashidhingra
Path Finder

How can I create an alarm when a location goes down?

index=internal sourcetype=abc
| timechart span=5m count(linecount) AS Count by loc useother=f usenull=f
| sort by _time desc
| table _time loc  | where loc< 10

2022-06-22 02:15:00 0 0 0 102 949 941 967 969 45 33
2022-06-22 02:14:00 0 0 0 143 1167 1139 1146 1195 49 75
2022-06-22 02:13:00 0 0 0 134 874 827 891 876 29 46
2022-06-22 02:12:00 1 0 0 130 770 789 773 736 59 60
1 Solution

ITWhisperer
SplunkTrust

After timechart, loc no longer exists as a field (you get a separate column for each value of loc), so use untable to turn those columns back into rows before filtering:

index=internal sourcetype=abc
| timechart span=5m count(linecount) AS Count by loc useother=f usenull=f
| sort -_time
| untable _time loc Count
| where Count < 10

The issue with this is that, as it stands, the values of loc will only be those found in the initial search: if locA has been down for the entire period covered by the search, you still won't pick it up this way.

However, if it is down for only some of the 5-minute periods, then you will be able to detect it.
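To turn this search into an actual alarm, one option is to schedule it and trigger whenever it returns any rows. A minimal savedsearches.conf sketch (the stanza name and schedule are illustrative; the attributes are standard savedsearches.conf options):

[Location down alert]
enableSched = 1
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now
search = index=internal sourcetype=abc | timechart span=5m count(linecount) AS Count by loc useother=f usenull=f | untable _time loc Count | where Count < 10
alert_type = number of events
alert_comparator = greater than
alert_threshold = 0

Equivalently, in Splunk Web you can do Save As > Alert on the search and set the trigger condition to "Number of Results" greater than 0.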


ashidhingra
Path Finder

Awesome, you are the best!!!

gcusello
SplunkTrust

Hi @ashidhingra,

If the down period is short (less than the time frame of the search), you can use a search like the one you shared.

Otherwise you need a different approach: if there are only a few locations (loc) to monitor, you can use a search like this:

index=internal sourcetype=abc
| eval loc=lower(loc)
| stats count BY loc
| append [ | makeresults | eval loc="loc1", count=0 | fields loc count ]
| append [ | makeresults | eval loc="loc2", count=0 | fields loc count ]
| append [ | makeresults | eval loc="loc3", count=0 | fields loc count ]
| stats sum(count) AS total BY loc
| where total=0
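The trick here is that the appended makeresults rows guarantee every monitored loc appears with at least a zero-count row, so the final stats keeps it and where total=0 isolates the silent ones. A run-anywhere sketch of the same idea (loc names are placeholders; loc3 has no "real" events):

| makeresults count=3
| streamstats count AS n
| eval loc=case(n=1,"loc1",n=2,"loc2",n=3,"loc1"), count=1
| stats sum(count) AS count BY loc
| append [ | makeresults | eval loc="loc3", count=0 | fields loc count ]
| stats sum(count) AS total BY loc
| where total=0

Only loc3 survives the final where, which is exactly the "location is down" signal you want to alert on.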

If instead you have many locs, create a lookup (called e.g. perimeter.csv) containing all the locs to monitor, with at least one column (called "loc"), and schedule a search like the following:

index=internal sourcetype=abc
| eval loc=lower(loc)
| stats count BY loc
| append [ | inputlookup perimeter.csv | eval loc=lower(loc), count=0 | fields loc count ]
| stats sum(count) AS total BY loc
| where total=0
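If you don't want to build perimeter.csv by hand, you can seed it from history with outputlookup. A sketch, assuming 30 days of data is enough to see every location at least once:

index=internal sourcetype=abc earliest=-30d
| eval loc=lower(loc)
| stats count BY loc
| fields loc
| outputlookup perimeter.csv

After that, review the lookup and add or remove locs as the perimeter changes; the scheduled search above will then flag any loc in the lookup that produced no events.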

Ciao.

Giuseppe

