How can I proactively monitor my Splunk indexes to make sure they are still indexing? I have an SNMP monitoring application; is there a way to set up an SNMP alert/monitor so that if Splunk stops indexing, it sends me an alert?
Or, is there a way to do this directly within Splunk?
thanks, glh
1) For a simple user-interface answer: log in as an admin user, go to the Search app, then to "Status", then to "Indexing Activity". It'll show things like recent indexing activity over time and top sourcetypes.
2) For a more hands-on approach: again log in as an admin, go to the Search app, go to "Views", then to "Advanced Charting", and run this search:
index="_internal" source=*metrics.log* group="per_sourcetype_thruput" | timechart sum(kb) by series
(You'll probably want to set the 'chart type' to 'line', or set it to 'column' and set 'Stack mode' to 'stacked'.)
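If you care about whole indexes rather than sourcetypes, metrics.log also records a per-index throughput group, so a variant of the same search shows indexed volume per index:

index="_internal" source=*metrics.log* group="per_index_thruput" | timechart sum(kb) by series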
As for the alerting/monitoring: with those search results in front of you, save that search as a saved search (Actions > Save Search), then click 'schedule options' to open up a bunch more fields. Among other things, you can have it send out email, or have it run a script if/when a custom condition matches on the search results. So although Splunk doesn't have any specific SNMP support built in, you can hook up a script that sends the trap for you.
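For example (a sketch, not verbatim from the docs): schedule a search like the one below every 15 minutes and set the alert condition to trigger when the number of events is zero, i.e. nothing was indexed in that window:

index="_internal" source=*metrics.log* group="per_index_thruput" earliest=-15m | stats sum(kb) as indexed_kb by series

Then, for the SNMP side, drop a script into $SPLUNK_HOME/bin/scripts and name it in the saved search's "run a script" field. Here's a minimal Python sketch, assuming net-snmp's snmptrap command is installed and that the manager hostname, community string, and OIDs are placeholders you'd replace with your own:

#!/usr/bin/env python
# Sketch of a Splunk alert script that forwards the alert as an SNMP trap.
# Splunk's scripted-alert interface passes details as arguments; argv[4] is the
# saved search name and argv[5] is the trigger reason (check the alerting docs
# for your Splunk version).
import subprocess
import sys

search_name = sys.argv[4] if len(sys.argv) > 4 else "unknown search"
reason = sys.argv[5] if len(sys.argv) > 5 else "condition matched"

# Placeholder destination, community string, and enterprise OIDs -- replace with yours.
subprocess.call([
    "snmptrap", "-v", "2c", "-c", "public", "nms.example.com", "",
    "1.3.6.1.4.1.99999.1",                       # hypothetical trap OID
    "1.3.6.1.4.1.99999.1.1", "s",
    "Splunk alert: %s (%s)" % (search_name, reason),
])

Your SNMP monitoring application would then pick up the trap like any other, so the "Splunk stopped indexing" condition shows up alongside the rest of your alerts.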