I have several alerts that send an email when they fire, and everything has been working for several weeks. However, I noticed over the weekend that search results that should have triggered an alert did not send an email. I don't see any errors in the scheduler.log file.
Any other recommendations for troubleshooting this issue?
Look in the jobs log (Activity -> Jobs) to see if the jobs ran, then check the alerts log (Activity -> Triggered Alerts) to see if they triggered any alerts. Then check splunkd.log for "sendemail" to see if there were problems sending mail.
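If it's easier than grepping splunkd.log on disk, something like this over the _internal index should surface the same entries (a rough sketch, assuming your role is allowed to search _internal):

index=_internal sourcetype=splunkd sendemail
| table _time log_level component _raw

Anything at ERROR or WARN level there usually points at the mail server settings or the sendemail action itself.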
When you look at historical data, you may see events that are there now but had not made it into the system yet when the job ran, because of indexing lag. You can measure that lag with a search like this:
index=<searched_index> host=<host_that_did_not_get_triggered> <other relevant filters>
| eval lagSecs = _indextime - _time
| timechart avg(lagSecs)
If you cannot see any results for this job, then perhaps the search has not run in a while. You did mention you did not see any errors in the scheduler log, but is this search actually running when scheduled?
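One way to confirm the scheduler is actually dispatching it is to search the scheduler's own internal logs rather than just the file on disk. A minimal sketch, substituting your alert's saved search name (again assuming you can search _internal):

index=_internal sourcetype=scheduler savedsearch_name="<your alert name>"
| stats count by status

Statuses such as skipped or continued would suggest the job is being scheduled but not completing normally, while no results at all would line up with the search simply not running.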
I can't see anything past 06-23, and I have no idea why I wouldn't be seeing logs beyond that date.
So,
Activity -> Jobs (nothing past 6/23, though I've been running this for weeks and it's been working)
Activity -> Triggered Alerts (empty, though I think that's because the entries are older than 6/23)
splunkd.log (zero "sendemail" entries in the file)