I downvoted this post because it does not work: it can't filter by the AWS start_time. The date-time format is screwy, and it collects all events rather than only the ones in the selected time range. So I end up with 50-ish historical events every hour instead of the one or two I'm looking for that actually occurred in the past hour.
I don't understand your question. You aim to trigger an alert if there is no snapshot for a while, don't you? If so, just use the search " aws-config-index sourcetype="aws:config" " and edit the conditions in the alert dialog.
The problem I am having is that ALL events come through every time, including ones from months ago, and they are time-stamped by Splunk as occurring at the time of the search. The start_time value is extracted, but only as a regular field, and its format is very strange ( start_time: 2016-03-19T07:01:05.000Z ), making it difficult to trigger on an event (or the lack of one) in a defined time range, like the last 4 hours. Any tips on how to do this?
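One possible approach (a sketch, not tested against your data): parse start_time into epoch time with Splunk's strptime eval function, then compare it against relative_time. The "%3N" millisecond specifier is an assumption based on the sample value above, and the index name is taken from the earlier reply:

```
index="aws-config-index" sourcetype="aws:config"
| eval start_epoch=strptime(start_time, "%Y-%m-%dT%H:%M:%S.%3NZ")
| where start_epoch >= relative_time(now(), "-4h")
| stats count
```

With the events filtered to the real snapshot window, you could then set the alert to fire when count is 0 (no snapshot in the last 4 hours).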