Alerting

Is it possible to create a single alert that triggers if the event count is <1 on a per-host basis?

ckillg
Path Finder


e.g. if I search index=network-devices and set the alert to trigger if the event count is <1 in a 2-minute period, the alert would never trigger unless ALL of my hosts were down.

Do I have to create an alert for every host? If so, is there a quick way to do this?

Edit: I don't think throttling works for this.

1 Solution

Richfez
SplunkTrust

The obvious answer - searching for where count<1 - isn't easy to make work, because a host that has gone silent produces no events to count at all. You'd have to keep a lookup of which hosts ARE supposed to be reporting and compare your results against it. That's a trickier problem than you might think (though it CAN be solved!)
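If you do want the lookup route, here's a rough sketch of how it could look. It assumes a CSV lookup you maintain yourself (expected_hosts.csv is a made-up name) with a single host column listing every device that should be reporting:

index=network-devices earliest=-2m | stats count by host | append [| inputlookup expected_hosts.csv | eval count=0] | stats sum(count) as count by host | where count=0

Hosts that sent events in the last two minutes end up with a count of at least 1; hosts that appear only in the lookup stay at 0, and those are exactly the ones the where clause returns as missing. The catch is keeping that lookup current, which is the bookkeeping you'd rather avoid.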

Luckily there's another, better way using Splunk's metadata - the data it keeps about the data it's collecting. We ask the metadata for the hosts in that index, create a new field called "last_contact" holding the number of seconds since Splunk last saw each host, then keep only the hosts whose last_contact is more than 120 seconds old. The rest of the command just makes a pretty list of what it finds, so when you email yourself the alert it'll be useful and nice.

| metadata index=network-devices type=hosts | eval last_contact=now()-lastTime | where last_contact>120 | sort - last_contact | convert ctime(lastTime) | fields host,last_contact,lastTime

Give that a try - it should do what you need!

One note: this will return any host that hasn't checked in within the past 2 minutes, including old, decommissioned hosts. To get around that (in reasonably simple cases) you may have to filter a few known-dead hosts out of the output.

| metadata index=network-devices type=hosts | search host!="mydeadhost1" host!="mydeadhost2" | eval last_contact=now()-lastTime | where last_contact>120 | sort - last_contact | convert ctime(lastTime) | fields host,last_contact,lastTime
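To turn this into the alert itself, save that search, schedule it every couple of minutes, and trigger when it returns any results. In savedsearches.conf terms it would look roughly like this (a sketch only - the stanza name and email address are placeholders):

[Missing network devices]
search = | metadata index=network-devices type=hosts | eval last_contact=now()-lastTime | where last_contact>120 | sort - last_contact | convert ctime(lastTime) | fields host,last_contact,lastTime
enableSched = 1
cron_schedule = */2 * * * *
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = you@example.com

The same thing can be set up through the UI via Save As > Alert, with the trigger condition set to "Number of Results is greater than 0".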

ckillg
Path Finder

This is perfect.

Thank you!
