The only thing I can think of is that your events are expiring between when the alert fires and when you double-check. The search below will tell you the oldest event still in the index. It should be weeks, if not months, old; but perhaps it is only hours or days old.
| metadata type=sourcetypes | search sourcetype=log4net
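The metadata command returns epoch timestamps (firstTime, lastTime, recentTime), which are hard to eyeball. A sketch of the same search with the times made human-readable (convert/ctime is standard SPL; the index=* scoping is my assumption, adjust to wherever your log4net data lives):

```
| metadata type=sourcetypes index=*
| search sourcetype=log4net
| convert ctime(firstTime) ctime(lastTime) ctime(recentTime)
```

If firstTime is only hours or days in the past, your retention settings (frozenTimePeriodInSecs or index size limits) are aging events out faster than you expect.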
Usually when I have something that "tests OK" in an ad-hoc search but fails in a scheduled search, it is due to pipeline latency. Check the values of
_indextime - _time for your events. These should be positive and no more than about 300 seconds.
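A quick way to check that lag across a batch of events is a sketch like the following (eval and stats are standard SPL; the earliest=-60m window is just an example, pick whatever range your alert covers):

```
sourcetype=log4net earliest=-60m
| eval lag=_indextime - _time
| stats count min(lag) max(lag) avg(lag) perc95(lag)
```

If max(lag) exceeds the lookback window of your scheduled search, events can arrive after the alert has already run over that window, which exactly produces the "alert fired but I can't find the events" symptom. Negative lag values point at the clock-skew problem below.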
Since the magnitude is so low, the problem is most likely that your forwarders and/or indexers are not using
NTP and have drifted from true. To see whether it is your indexers, try this:
| rest /services/server/info
| eval updated_t=round(strptime(updated, "%Y-%m-%dT%H:%M:%S%z"))
| eval delta_t=now()-updated_t
| eval delta=tostring(abs(delta_t), "duration")
| table serverName, updated, updated_t, delta, delta_t
If delta is anything more than about 00:00:01 (a second or so is easy to account for when the search has to poll a lot of indexers), you have clock skew and are a naughty boy, because you should have set up NTP on your indexers.
NOTE: this IS a problem, but it is not the problem that you were asking about.
Your trigger times in the capture show 12:27 to 12:34, but your search covers 1:11 to 1:21. Is it possible that there were simply no triggered events between 1:11 and 1:21? What happens if you change your search time frame to 12:27-12:34?