I'm trying to find the elapsed time between two events: one containing the string "/makeCreditCardPaymentSD" and the one that follows it.
The transaction is grouped over a field called callid, which is correctly extracted.
The logs from which I'm pulling these events may have thousands of irrelevant events between any two for the same callid, but I'm assuming that doesn't matter.
This is what I came up with for a transaction clause:
| transaction callid startswith="/makeCreditCardPaymentSD" maxevents=2
It works about 3/4 of the time. All the other times it extracts only one event even though there are definitely more events in the transaction. For example, the search with that transaction clause returned this as one of the transactions (IP address redacted):
20180813,12:02:43.644,http-nio-7000-exec-193,INFO ,WebUtilities.getFileNoCache.119,prdvpsivr802-1124346-2018225185936 | FETCH http://###:8080/Postpaid_HostCall/vxml/jsp/makeCreditCardPaymentSD.jsp
If I change maxevents to 3, and change nothing else about the query or time range, I get three events in the transaction for that callid.
I've tried a bunch of variations of keeporphans, keepevicted, maxopentxn, maxopenevents, and so on - nothing helps.
The one thing I've tried that does seem to get the right results is to reverse the incoming events and use endswith instead of startswith:
| reverse | transaction callid endswith="/makeCreditCardPaymentSD" maxevents=2
But then "reverse" seems to be using a huge amount of memory.
Any suggestions on how to fix this?
ETA: I've managed to mitigate the maxevents conflict by setting startswith AND endswith conditions on the transaction, such that a transaction starts with any event containing /makeCreditCardPaymentSD and ends with any event that doesn't contain it:
| transaction callid startswith=eval(if(searchmatch("/makeCreditCardPaymentSD"),true(),0)) endswith=eval(if(searchmatch("/makeCreditCardPaymentSD"),0,true())) maxevents=2 unifyends=true
with the unifyends seemingly necessary to keep other events from elbowing in.
I'm not sure this is capturing all the events I want, though - there are a smaller number of transactions showing up than I expected. I'll keep testing to make sure.
index="yourIndex" callid [index="yourIndex" "/makeCreditCardPaymentSD" | dedup callid | table callid] | stats min(_time) as startTime max(_time) as endTime by callid | eval duration = endTime - startTime
That should be quicker than a transaction. It works by first finding all the callids for "/makeCreditCardPaymentSD", then evaluating the time between the first and last event for each of those callids. Would that work?
Interesting - my first attempt at this used a subsearch, but it wasn't as clean as this one. The subsearch limiting the relevant callids is a great idea.
I'm trying to find the duration between the event that contains /makeCreditCardPaymentSD within a given call and the next event after it, not the duration of the entire call.
I will try modifying this and see if I can get what I need.
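For example, something like this might get the gap to the next event without transaction at all - just a sketch, assuming events arrive in Splunk's default newest-first order (so streamstats sees each callid's chronologically later event one row earlier), and using the same placeholder index name from your query:

index="yourIndex" callid [index="yourIndex" "/makeCreditCardPaymentSD" | dedup callid | table callid] | streamstats current=f window=1 last(_time) as nextTime by callid | search "/makeCreditCardPaymentSD" | eval duration = nextTime - _time

The streamstats carries each event's successor timestamp per callid, the search keeps only the /makeCreditCardPaymentSD events, and the eval gives the elapsed time to whatever came next.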
...and I realize I'm using some ambiguous terminology here - just to be clear.
The transactions I'm looking at represent logs of activity by customers contacting an automated phone system, so they're referred to as "calls".
So a transaction grouping callid is what I mean by "call." Your query appears to calculate the durations of entire calls, not of a pair of events within calls.
That did cross my mind after I posted it. In that case you might need the transaction, unless you have a list of the things that could come after /makeCreditCardPaymentSD but never before it...
transaction is generally quite an "expensive" command, so limiting it to calls that contain /makeCreditCardPaymentSD will save a bit of resource!
OK, thank you - unfortunately, there's no easy way of knowing what might come next after the /makeCreditCardPaymentSD event.
Agreed that your subsearch to limit the field of calls I'll be running through "transaction" is a fine idea - the search as written is indeed a hog.
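For reference, the combined form I plan to test is just your subsearch bolted onto my original transaction clause (same hypothetical index name):

index="yourIndex" callid [index="yourIndex" "/makeCreditCardPaymentSD" | dedup callid | table callid] | transaction callid startswith="/makeCreditCardPaymentSD" maxevents=2

The subsearch should shrink the event stream that transaction has to chew through to only the calls that can possibly match.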
I've made a little bit of progress with the maxevents conundrum and will update the post to reflect.