Monitoring Splunk

Monitoring log sources that go silent?

dionrivera
Path Finder

Hi team. I'm looking for a query/solution that will alert me when a log source is no longer sending logs. For example, I have an index called "linux_prod" which is populated when Linux hosts forward their events. I would like to receive an alert when this index has received no events for the past hour. This happens when SC4S has a problem or there is some other issue on the network.

Thank you.

dionrivera
Path Finder

Thank you sir. Much appreciated.

gcusello
SplunkTrust

Hi @dionrivera,

good for you, see you next time!

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated 😉

gcusello
SplunkTrust

Hi @dionrivera,

you have to list the indexes to monitor and put them in a lookup (called e.g. perimeter.csv) containing at least one column (called e.g. index).
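
For illustration, perimeter.csv might look like this (linux_prod comes from your question; the other index names are just placeholders for whatever you want to monitor):

index
linux_prod
windows_prod
firewall_logs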

then you could run something like this:

| metasearch index=* [ | inputlookup perimeter.csv | fields index ]
| stats count BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0
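
An equivalent variant swaps metasearch for tstats, which is usually faster on large environments; this is just a sketch assuming the same perimeter.csv lookup with an index column:

| tstats count WHERE index=* [ | inputlookup perimeter.csv | fields index ] BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0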

Ciao.

Giuseppe

dionrivera
Path Finder

@gcusello Any suggestions on how I could include the time within the query? I need to check every hour whether the event count has changed.

Grazie!

gcusello
SplunkTrust

Hi @dionrivera,

you can run the search with the time range set to the last hour, or add "earliest=-h latest=now" to the main search:

| metasearch index=* earliest=-h latest=now [ | inputlookup perimeter.csv | fields index ]
| stats count BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0

and schedule your alert every hour with cron:

0 * * * *
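
If you prefer to define the alert directly in savedsearches.conf instead of the UI, a minimal sketch could look like the following (the stanza name and e-mail address are placeholders, and the alert action should be adapted to what you use):

# hypothetical stanza: alert when any monitored index sent no events in the last hour
[Silent log sources - last hour]
search = | metasearch index=* [ | inputlookup perimeter.csv | fields index ] \
| stats count BY index \
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ] \
| stats sum(count) AS total BY index \
| where total=0
dispatch.earliest_time = -1h
dispatch.latest_time = now
enableSched = 1
cron_schedule = 0 * * * *
# trigger when the search returns at least one silent index
counttype = number of events
relation = greater than
quantity = 0
# replace with your preferred alert action
action.email = 1
action.email.to = soc@example.com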

Ciao.

Giuseppe