Monitoring Splunk

Monitoring log sources that go silent?

dionrivera
Communicator

Hi team. I'm looking for a query/solution that will alert me when a log source is no longer sending logs. For example, I have an index called "linux_prod" which is populated when Linux hosts forward their events. I would like to receive an alert when this index has not received events for the past hour. This happens when SC4S or something else on the network has problems.

Thank you.

dionrivera
Communicator

Thank you sir. Much appreciated.


gcusello
SplunkTrust

Hi @dionrivera,

glad it helped, see you next time!

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated 😉


gcusello
SplunkTrust

Hi @dionrivera,

you have to list the indexes to monitor and put them in a lookup (called e.g. perimeter.csv) containing at least one column (called e.g. index).
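For reference, a minimal perimeter.csv could look like this (linux_prod comes from the original question; the other index names are just placeholders):

index
linux_prod
windows_prod
firewall_logs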

then you could run something like this:

| metasearch index=* [ | inputlookup perimeter.csv | fields index ]
| stats count BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0
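As an alternative sketch, assuming the same perimeter.csv lookup, tstats can compute the per-index counts from index metadata (usually faster than metasearch) and also report when each index last received data:

| tstats count latest(_time) AS last_seen WHERE index=* BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total max(last_seen) AS last_seen BY index
| where total=0

Note that last_seen will be empty for indexes that were completely silent in the search window, since tstats returns no rows for them.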

Ciao.

Giuseppe


dionrivera
Communicator

@gcusello Any suggestions on how I could include the time within the query? I need to check every hour whether the event count has changed.

Grazie!

 


gcusello
SplunkTrust

Hi @dionrivera,

you can run the search with the last hour as its time range, or insert "earliest=-h latest=now" in the main search:

| metasearch index=* earliest=-h latest=now [ | inputlookup perimeter.csv | fields index ]
| stats count BY index
| append [ | inputlookup perimeter.csv | eval count=0 | fields index count ]
| stats sum(count) AS total BY index
| where total=0

and schedule your alert every hour with this cron expression:

0 * * * *
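If you'd rather manage this outside the UI, the schedule and trigger condition can be sketched in savedsearches.conf along these lines (the stanza name and email address are illustrative; the alert fires when the search returns at least one silent index):

[Silent log sources - hourly check]
enableSched = 1
cron_schedule = 0 * * * *
dispatch.earliest_time = -h
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
search = | metasearch index=* [ | inputlookup perimeter.csv | fields index ] | stats count BY index | append [ | inputlookup perimeter.csv | eval count=0 | fields index count ] | stats sum(count) AS total BY index | where total=0
action.email = 1
action.email.to = you@example.com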

Ciao.

Giuseppe

 
