Monitoring Splunk

Monitoring log sources that go silent?

dionrivera
Communicator

Hi team. I'm looking for a query/solution that will alert me when a log source is no longer sending logs. For example, I have an index called "linux_prod" which is populated when Linux hosts forward their events. I would like to receive an alert when this index has stopped receiving events for the past hour. This happens when SC4S or some other component on the network has problems.

Thank you.


dionrivera
Communicator

Thank you sir. Much appreciated.


gcusello
SplunkTrust

Hi @dionrivera,

good for you, see you next time!

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated 😉


gcusello
SplunkTrust

Hi @dionrivera,

you have to list the indexes to monitor in a lookup, called e.g. perimeter.csv, containing at least one column (called e.g. index).
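
For example (linux_prod comes from the question; the other index names are just placeholders), perimeter.csv could simply contain:

index
linux_prod
windows_prod
network_fw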

then you could run something like this:

| metasearch index=* [ | inputlookup perimeter.csv | fields index ]
| stats count BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0
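
For readability, here is the same logic as an annotated sketch (the triple-backtick inline comments require a reasonably recent Splunk version, roughly 8.0 or later; drop them otherwise):

| metasearch index=* [ | inputlookup perimeter.csv | fields index ] ```only events from the monitored indexes```
| stats count BY index ```one row per index that actually received events```
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ] ```add a zero row for every monitored index, so silent ones still show up```
| stats sum(count) AS total BY index ```active indexes get total>0, silent ones stay at 0```
| where total=0 ```keep only the indexes that sent nothing```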

Ciao.

Giuseppe


dionrivera
Communicator

@gcusello Any suggestions on how I could include the time within the query? I need to check every hour whether the event count has changed.

Grazie!

 


gcusello
SplunkTrust

Hi @dionrivera,

you can run the search with a time range of the last hour, or insert "earliest=-h latest=now" in the main search:

| metasearch index=* earliest=-h latest=now [ | inputlookup perimeter.csv | fields index ]
| stats count BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0

and schedule your alert every hour with cron:

0 * * * *
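
Putting it together, the scheduled alert could look roughly like this in savedsearches.conf (a sketch: the stanza name and e-mail address are placeholders, and the same options can be set through the Save As > Alert UI instead):

# $SPLUNK_HOME/etc/apps/<your_app>/local/savedsearches.conf
[Silent log sources - last hour]
search = | metasearch index=* [ | inputlookup perimeter.csv | fields index ] \
| stats count BY index \
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ] \
| stats sum(count) AS total BY index \
| where total=0
# time range handled here instead of inline earliest/latest
dispatch.earliest_time = -h
dispatch.latest_time = now
# run every hour
enableSched = 1
cron_schedule = 0 * * * *
# trigger when the search returns at least one silent index
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1
action.email = 1
action.email.to = you@example.com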

Ciao.

Giuseppe

 
