I would like to set up alerts that let me know if no events come in for a particular source or index within a certain time span. For instance, if my firewall stops sending events into Splunk for 5 or 10 minutes, I would like to receive an alert, or if a server does not send any for a few hours, and so on.
What would be the most efficient way for me to accomplish this, considering that some of these data sources are fairly large?
Thanks
@jflaherty,
The most efficient way is to use the metadata command.
The search below gives you the delay in minutes for all sources across your indexes. You can filter it further based on your requirements and alert on it, e.g. | where delay > 60
| metadata type=sources index=* | eval delay=round((now()-recentTime)/60,0)
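For example, a scheduled alert search for sources that have been silent for more than an hour might look like this (a sketch; the 60-minute threshold is an assumption you should tune per source):
| metadata type=sources index=*
| eval delay=round((now()-recentTime)/60,0)
| where delay > 60
| table source, delay
Save it as a scheduled search and trigger the alert when the number of results is greater than zero.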
We use the following to monitor forwarders, sorted by how many days they have been offline. You can easily adjust it...
| inputlookup <host list>
| fields host
| join type=left host
[| metadata type=hosts index=_internal
| eval host=lower(host)
| eval _time=recentTime
| sort host, _time
| stats latest(_time) as recentTime by host ]
| eval LAST=strftime(recentTime,"%a %m/%d/%Y-%T %Z(%z)"), DAYS_AGO=round((recentTime-now())/86400,0)
| sort DAYS_AGO
| where DAYS_AGO < 0
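If you want a similar last-seen check per source or sourcetype on very large indexes, a tstats variant is another fast option, since it reads only indexed fields and honors the time range you pick. This is a sketch, not the method above; the 60-minute threshold is an assumption:
| tstats max(_time) as recentTime where index=* by index, sourcetype
| eval delay=round((now()-recentTime)/60,0)
| where delay > 60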