Monitoring Splunk

Monitoring log sources that go silent?

dionrivera
Path Finder

Hi team. I'm looking for a query/solution that will alert me when a log source is no longer sending logs. For example, I have an index called "linux_prod" which is populated with events forwarded from my Linux hosts. I would like to receive an alert when this index has not received any events in the past hour. This happens when SC4S has problems or there is some other issue on the network.

Thank you.


dionrivera
Path Finder

Thank you sir. Much appreciated.


gcusello
SplunkTrust
SplunkTrust

Hi @dionrivera,

good for you, see you next time!

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated 😉


gcusello
SplunkTrust
SplunkTrust

Hi @dionrivera,

you have to list the indexes to monitor and put them in a lookup, e.g. called perimeter.csv, containing at least one column (e.g. called index).

then you could run something like this:

| metasearch index=* [ | inputlookup perimeter.csv | fields index ]
| stats count BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0
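
The append step adds a zero-count row for every index in the lookup, so after the second stats any index that sent nothing in the search window ends up with total=0. For reference, perimeter.csv only needs an index column; one way to create it from scratch (the index names here are just placeholders for your own environment) could be:

| makeresults
| eval index=split("linux_prod,windows_prod,network_fw", ",")
| mvexpand index
| table index
| outputlookup perimeter.csv

Alternatively, you can upload the CSV directly under Settings > Lookups > Lookup table files.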

Ciao.

Giuseppe


dionrivera
Path Finder

@gcusello Any suggestions on how I could include the time range within the query? I need to check every hour whether the event count has changed.

Grazie!

 


gcusello
SplunkTrust
SplunkTrust

Hi @dionrivera,

you can run the search with the last hour as its time range, or insert "earliest=-h latest=now" directly in the main search:

| metasearch index=* earliest=-h latest=now [ | inputlookup perimeter.csv | fields index ]
| stats count BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0

and schedule your alert every hour with cron:

0 * * * *
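
If you prefer to define the alert in configuration rather than through the UI, a minimal savedsearches.conf sketch could look like this (the stanza name, lookup name, and email address are placeholders; adjust them to your environment):

[Silent log source - hourly check]
enableSched = 1
cron_schedule = 0 * * * *
dispatch.earliest_time = -1h
dispatch.latest_time = now
search = | metasearch index=* [ | inputlookup perimeter.csv | fields index ] \
| stats count BY index \
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ] \
| stats sum(count) AS total BY index \
| where total=0
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = you@example.com

The trigger condition fires when the search returns at least one row, i.e. when at least one index in the lookup had no events in the last hour.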

Ciao.

Giuseppe

 
