Monitoring Splunk

Figuring out when a host last reported data, based on a lookup

oleg106
Explorer

Hello,

Looking for some advice on the popular topic of non-reporting hosts. Perhaps someone has already come across something like this, or has a better way of doing it.

We have device pairs that report differently, and I am looking for a way to alert when a device stops reporting based on the expected reporting cadence for that particular device. For example, a CSV with device name/IP and a column for the expected reporting threshold that can be used to generate an alert if that threshold is exceeded.

Example:

FW1-primary, 2m
FW1-secondary, 4h

So the search can look at the second column, and if it's been more than 4 hours since FW1-secondary sent an event, an alert can be generated.  TIA!

 

 

Solution

ericjorgensenjr
Path Finder

I would recommend doing it like this:

Lookup (demolookup):

host,seconds
somehostname,60

Your alert search:

| metadata type=hosts | table host recentTime | lookup demolookup host | eval threshold=now()-seconds | where recentTime<threshold
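If you would rather keep the human-readable thresholds from the question (2m, 4h) instead of pre-converting them to seconds, a possible variation is to let relative_time() build the cutoff time. This is only a sketch: the lookup name (devicethresholds) and the threshold column are assumptions, and the threshold values must be valid Splunk relative-time units (m, h, d, and so on).

Lookup (devicethresholds, hypothetical):

host,threshold
FW1-primary,2m
FW1-secondary,4h

Alert search (sketch):

| metadata type=hosts | table host recentTime | lookup devicethresholds host OUTPUT threshold | eval cutoff=relative_time(now(), "-".threshold) | where recentTime<cutoff

Here relative_time(now(), "-4h") returns the epoch time four hours ago, so any host whose recentTime is older than its own cutoff is returned and can trigger the alert.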
