
Monitoring log sources that go silent?

dionrivera
Communicator

Hi team. I'm looking for a query/solution that will alert me when a log source is no longer sending logs. For example, I have an index called "linux_prod" which is populated when Linux hosts forward their events. I would like to receive an alert when this index has not received events for the past hour. This happens when SC4S has problems or there is some other issue on the network.

Thank you.


dionrivera
Communicator

Thank you sir. Much appreciated.


gcusello
SplunkTrust

Hi @dionrivera,

good for you, see you next time!

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated 😉


gcusello
SplunkTrust

Hi @dionrivera,

you have to list the indexes to monitor in a lookup, called e.g. perimeter.csv, containing at least one column (called e.g. index).
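For example, perimeter.csv could contain something like this (the index names below are placeholders, use your own):

index
linux_prod
windows_prod
firewall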

then you could run something like this:

| metasearch index=* [ | inputlookup perimeter.csv | fields index ]
| stats count BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0
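The append of the lookup guarantees that every monitored index appears in the results with at least a zero count, so any index whose summed total is still 0 returned no events in the search time range.

If you have many indexes, you could also try an equivalent sketch using tstats, which reads the indexed metadata directly and is usually very fast even over long time ranges:

| tstats count WHERE index=* BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0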

Ciao.

Giuseppe


dionrivera
Communicator

@gcusello Any suggestions on how I could include the time within the query? I need to check every hour whether the event count has changed.

Grazie!

 


gcusello
SplunkTrust

Hi @dionrivera,

you can run the search with the last hour as its time range, or insert "earliest=-1h latest=now" directly in the main search:

| metasearch index=* earliest=-1h latest=now [ | inputlookup perimeter.csv | fields index ]
| stats count BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0
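If you want the search window aligned to whole hours, matching the hourly schedule, you could use "earliest=-1h@h latest=@h" instead.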

and schedule your alert every hour with cron:

0 * * * *
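If you prefer, you can also define the whole alert directly in savedsearches.conf; a minimal sketch (the stanza name and e-mail address are placeholders):

# sketch only: stanza name and recipient are placeholders
[Silent log sources - perimeter]
search = | metasearch index=* [ | inputlookup perimeter.csv | fields index ] \
  | stats count BY index \
  | append [ | inputlookup perimeter.csv | eval count=0 | fields index count ] \
  | stats sum(count) AS total BY index \
  | where total=0
dispatch.earliest_time = -1h@h
dispatch.latest_time = @h
enableSched = 1
cron_schedule = 0 * * * *
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = you@example.com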

Ciao.

Giuseppe

 
