I have a search that tells me when an index hasn't received data from a log on a server. This lets me monitor the Splunk environment and make sure everything is working as expected. The only problem with the search is application logs that aren't written to every day; those generate a lot of false positives.
I have also created an app that runs ls -al against every inputs.conf under etc/apps/ and etc/system/local and sends the results to an index (index_check_unix).
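For context, the app is essentially a scripted input along these lines (a sketch only; the app name, script name, interval, and sourcetype here are illustrative, not my exact config):

```
# etc/apps/index_check/local/inputs.conf (illustrative)
[script://./bin/list_inputs.sh]
interval = 86400
index = index_check_unix
sourcetype = inputs_conf_listing
disabled = 0
```

```
#!/bin/sh
# bin/list_inputs.sh (illustrative) - list every inputs.conf so the
# forwarder indexes the output into index_check_unix
ls -al "$SPLUNK_HOME"/etc/apps/*/local/inputs.conf \
       "$SPLUNK_HOME"/etc/system/local/inputs.conf 2>/dev/null
```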
Combining the information from these two items is where I'm having trouble. I'd like to either combine the search with the additional index so the report only shows the servers that are actually having issues, or add a separate column that shows the index_check_unix output for the servers that appear in the report. Below is my search for the unix servers (I have a separate search for windows):
| tstats latest(_time) AS lastTime WHERE index=* BY host index sourcetype source
| eval current=now()
| eval age_min=round((current-lastTime)/60,2)
| rangemap field=age_min low=0-720 elevated=721-1440
| search range!=low AND range!=elevated
| stats max(current) AS "Current Time" values(index) AS index values(sourcetype) AS sourcetype values(source) AS Location list(lastTime) AS "Latest Event" list(range) AS Status by host
| convert ctime(*Time) ctime("Latest Event")
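One way to sketch the combination (an assumption on my part: that the index_check_unix events carry the reporting server in the host field and the ls -al listing in _raw) is a left join on host, so the listing shows up as an extra column and servers with no listing still appear:

```
| tstats latest(_time) AS lastTime WHERE index=* BY host index sourcetype source
| eval age_min=round((now()-lastTime)/60,2)
| rangemap field=age_min low=0-720 elevated=721-1440
| search range!=low AND range!=elevated
| join type=left host
    [ search index=index_check_unix
      | stats latest(_raw) AS inputs_conf_listing BY host ]
| stats values(index) AS index values(sourcetype) AS sourcetype
        values(source) AS Location list(lastTime) AS "Latest Event"
        list(range) AS Status values(inputs_conf_listing) AS "inputs.conf Listing" BY host
| convert ctime("Latest Event")
```

Note that join is subject to subsearch result and time limits, so on a large environment an append followed by stats values() BY host may be a safer pattern than join.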