Monitoring Splunk

Figuring out last host reported data based on lookup

oleg106
Explorer

Hello,

Looking for some advice on the popular topic of non-reporting hosts.  Perhaps someone has already come across something like this, or has a better way of doing it.

We have device pairs that report at different rates, and I am looking for a way to alert when a device stops reporting, based on the expected reporting cadence for that particular device.  For example, a CSV with the device name/IP and a column for the expected reporting threshold, which can be used to generate an alert when that threshold is exceeded.

Example:

FW1-primary, 2m
FW1-secondary, 4h
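
In lookup form, I picture something like the following; the column names here are just placeholders:

host,threshold
FW1-primary,2m
FW1-secondary,4h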

So the search can look at the second column, and if it's been more than 4 hours since FW1-secondary sent an event, an alert can be generated.  TIA!

ericjorgensenjr
Path Finder

I would recommend doing it like this:

Lookup (demolookup):

host,seconds
somehostname,60

Your alert search:

| metadata type=hosts
| table host recentTime
| lookup demolookup host
| eval threshold=now()-seconds
| where recentTime<threshold
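
If you would rather keep the human-readable thresholds from the question ("2m", "4h") instead of pre-converting them to seconds, a rough sketch of the same idea could parse the unit suffix in the search. The lookup name (demolookup_units) and column name (threshold) below are placeholders, not anything defined in this thread:

| metadata type=hosts
| table host recentTime
| lookup demolookup_units host OUTPUT threshold ``` placeholder lookup and field names ```
| eval unit=substr(threshold, len(threshold), 1)
| eval num=tonumber(substr(threshold, 1, len(threshold)-1))
| eval seconds=case(unit=="s", num, unit=="m", num*60, unit=="h", num*3600, unit=="d", num*86400)
| where recentTime < now() - seconds

Saved as a scheduled alert, either version returns any host whose most recent event is older than its own threshold.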

 

 
