Hi Team,
I need to configure a Splunk alert that notifies us when no logs have been indexed from one or more servers for more than an hour. The requirements are:
1. 40 servers in total require monitoring
2. Each server has an average of 3 log paths
NOTE: I have seen an existing solution, but its config targets a single server host; I need a scalable solution that covers all 40 servers.
Please let me know if you need any further information.
Hi @Ganesh1,
there are many solutions to this request in the Community.
You have to create a lookup (called e.g. perimeter.csv) containing the 40 hosts to monitor, with at least one column named "host".
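For illustration (the hostnames below are placeholders, not from your environment), perimeter.csv could simply be:

host
server01
server02
...

Then schedule a search like the following to run every hour, with the alert's time range set to the last 60 minutes: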
| tstats count WHERE index=* BY host
| append [ | inputlookup perimeter.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0
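The appended lookup rows with count=0 guarantee that every monitored host appears in the results, so a host that logged nothing in the search window ends up with total=0 and is returned by the alert.

To also cover your second requirement (an average of 3 log paths per server), the same pattern can be extended to the source level. This is only a sketch, assuming a second lookup (perimeter_paths.csv is a hypothetical name) with the columns "host" and "source" listing the expected log paths for each server:

| tstats count WHERE index=* BY host source ``` count events per host and log path ```
| append [ | inputlookup perimeter_paths.csv | eval count=0 | fields host source count ]
| stats sum(count) AS total BY host source
| where total=0

This returns every host/log-path pair that received no events in the search window.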
Ciao.
Giuseppe
Finding something that is not there is not Splunk's strong suit. See this blog entry for a good write-up on the problem:
https://www.duanewaddle.com/proving-a-negative/