Ah, a bit more complex, but still doable. Try a transaction.
Let's start with just a simple one run over the past hour or so:
index=myIndex | transaction ActionId maxspan=30m
Here I'm assuming that the time between the starting and ending events for any one ActionId is less than 30 minutes. Adjust as necessary.
If you run that, do you get your events grouped together by ActionId? If not, just reply back with what you see and we can straighten it out for you. If they do, your battle has been won! Well, mostly. 🙂
So if it works and creates the transactions, you should have two new fields, duration and eventcount. Try:
index=myIndex | transaction ActionId maxspan=30m | table ActionId, _time, duration, eventcount
Just to see.
But that's only half your problem. Think for a second: if it takes an average ActionId 15 minutes to "do its thing", then you can't reasonably find incomplete ActionIds until at least 15 minutes later, right? So, if we assume an hourly search, we'd want to go from -75 minutes to -15 minutes. But we also want the transactions to extend over recent data, not stop 15 minutes ago. I was thinking about the usual way to solve this with subsearches, and realized there's an easier way.
Let's build a search that goes back 75 minutes, creates transactions over the -75m-to-now window, but then trims off any that started in the past 15 minutes (900 seconds):
index=myIndex earliest=-75m | transaction ActionId maxspan=30m
| eval trim_time=now()-900 | search _time<trim_time
You can pipe that to the table like above to see what it does, but that should be your search. Please check it!!! I think it'll work, and I think it's working in my test data, but my test data is not like your data!
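If it helps to see why the trim works, here's a tiny Python sketch of the same idea outside Splunk. The event data and function name are made up for illustration; it just groups events by ActionId (a toy stand-in for the transaction command), drops any group that started inside the last 900 seconds, and reports the ones with only a single event:

```python
import time

def incomplete_actions(events, trim_seconds=900, now=None):
    """events is a list of (epoch_time, action_id) pairs.
    Returns the ActionIds that started before now - trim_seconds
    but have only one event (i.e. eventcount == 1)."""
    now = time.time() if now is None else now
    trim_time = now - trim_seconds
    groups = {}
    for t, action_id in events:
        groups.setdefault(action_id, []).append(t)
    result = []
    for action_id, times in groups.items():
        # Keep only transactions that started before the trim window...
        if min(times) < trim_time:
            # ...and flag the ones that never got their second event.
            if len(times) == 1:
                result.append(action_id)
    return sorted(result)

# With now=10000 and trim_time=9100: 'A' is a complete pair, 'B' is
# incomplete and old enough to flag, 'C' is too recent to judge yet.
print(incomplete_actions(
    [(8000, "A"), (8500, "A"), (8200, "B"), (9500, "C")],
    now=10000))  # → ['B']
```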
For your once-per-hour alert, tell it to trigger when eventcount=1.
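Putting the pieces together (and assuming your field really is named ActionId and your index myIndex), the whole alert search would look something like:

index=myIndex earliest=-75m | transaction ActionId maxspan=30m
| eval trim_time=now()-900 | search _time<trim_time eventcount=1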
In fact, if you pipe it to the table, you could have the alert send you the actual items in an email - just select the Inline Table option under the Email alert action.