Typically, Splunk cares more about ingesting what's generated in real time than about what isn't coming in. However, we do capture metadata about the data that arrives in Splunk. I can see a use case where a customer might want to know if a log is NOT emitting data, as that might indicate a failure of the application that generates the log (no log = no process running). I'd monitor the "non-capture" of log data on an exception basis rather than as the rule. So what you might want to do is ask Splunk: "Show me any (or specific) sources where the most recent event isn't within the last 5 minutes" -- for example:
| metadata type=sources
| eval gap_minutes=round((now()-recentTime)/60)
| eval currenttime=now()
| sort -recentTime
| convert ctime(currenttime) ctime(recentTime)
| fields + currenttime, recentTime, source, gap_minutes
It says: give me metadata about captured sources, calculate the gap between each source's "recentTime" and NOW in MINUTES, rounded to the nearest minute (up or down depending on the value), then sort by recentTime descending, convert the epoch times to human-readable form, and display only the fields I care about.
One of the great things about Splunk is that you have access to the same language we use to speak to the engine. So what would we do if I wanted to see "sources that haven't reported any events"? Add "| search gap_minutes>5". Then save it, schedule it for every 5 minutes, and if any results come back.. voila.. email.
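Putting that together, the scheduled search might look like this (a sketch -- the 5-minute threshold is just an example, tune it to how chatty your sources are):

```
| metadata type=sources
| eval gap_minutes=round((now()-recentTime)/60)
| search gap_minutes>5
| sort -gap_minutes
| convert ctime(recentTime)
| fields + source, recentTime, gap_minutes
```

Save it as an alert with a trigger condition of "number of results greater than 0", and each run will email you only the sources that have gone quiet.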
One thing I ran into is that I had negative numbers in my gap. That means I have "future events" -- gotta deal with that, as I didn't know they were there -- likely a timezone issue. That said, any source with a positive number has a gap between now and the last time we saw an event.
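One way to keep those future-dated sources from hiding in (or polluting) the gap alert is to tag them explicitly and surface anything that isn't healthy. A sketch, assuming the same 5-minute staleness threshold as above:

```
| metadata type=sources
| eval gap_minutes=round((now()-recentTime)/60)
| eval status=case(gap_minutes<0, "future-dated", gap_minutes>5, "stale", true(), "ok")
| search status!="ok"
| convert ctime(recentTime)
| fields + source, recentTime, gap_minutes, status
```

That way the timezone offenders show up as "future-dated" rather than silently passing the gap check.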
That "| metadata" search command can be used with "type=sources", "type=hosts", or "type=sourcetypes" as well.
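For example, the same gap check per host rather than per source (note that "type=hosts" returns a "host" field instead of "source"):

```
| metadata type=hosts
| eval gap_minutes=round((now()-recentTime)/60)
| sort -recentTime
| convert ctime(recentTime)
| fields + host, recentTime, gap_minutes
```

This is handy when "no events from this host at all" matters more than any individual log file going quiet.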