Splunk Search

Calculate response time from starttime and endtime

Explorer

Hi everyone!

Where I work, we have a little problem with the SQL server, so I made a Java program to test it.
I have a nice log, which contains rows like this:

Sent;2013-07-12 10:00:06.561;3c34c5e2a32c8b1a7b853958207d223f
Sent;2013-07-12 10:00:06.546;7792eff1b42c35e7a208ffec68f63220
Sent;2013-07-12 10:00:06.561;d86dd547438271f2c5bc6c2bceb7e6f3
Sent;2013-07-12 10:00:06.561;3c126ec175261f75fcbc567c5c89751d
Received;2013-07-12 10:00:07.984;d86dd547438271f2c5bc6c2bceb7e6f3
Received;2013-07-12 10:00:07.984;7792eff1b42c35e7a208ffec68f63220
Timeout;2013-07-12 10:00:07.796;3c34c5e2a32c8b1a7b853958207d223f
Timeout;2013-07-12 10:00:07.984;3c126ec175261f75fcbc567c5c89751d

I would like to pair every Sent and Received row using the last field (like in the first row: 3c34c5e2a..., it is a unique ID).
I just started using Splunk, and as far as I can see I'll need these keywords: transaction, field extraction (and chart, for seeing these beautiful response times).

I tried something like this:
... | reverse | transaction ";[a-z0-9]*"

As I understand it, every row will be a transaction (or not?), and I need the last string (that's why the regex). Maybe I'm going in the wrong direction?

I also have some "TIMEOUT;2013-07-12 11:00:43.409;c3926c0fa748e0c5189fd8cc1c433625" rows as well, and they are paired like SENT-TIMEOUT. But I think the SENT-RECEIVED pair duration is enough; maybe I'll figure out how to do that one later.

Can you guys help me figure out how to do it?

EDIT:
The final goal is to have some kind of table like this:
c3926c0fa748e0c5189fd8cc1c433625 34
as23sf... 435
kj23df... 96

where the strings are the unique IDs that pair the requests with the responses, and the number is endtime minus starttime in milliseconds/seconds/buzzlightyears 🙂

Thanks,
Tom

1 Solution

Contributor

You cannot use a regular expression directly with the transaction command; you need to create a field for the unique ID and then run transaction on that field.

Something like this:

| rex ";(?<UniqueID>[a-z0-9]*)" | transaction UniqueID


Communicator

First, you need to check whether your rex command extracts the correct result:

sourcetype=tamasvincze | rex ";(?P<UniqueID>([a-z]|[0-9]){32})" | table UniqueID

If the result is correct, then run:

sourcetype=tamasvincze | rex ";(?P<UniqueID>([a-z]|[0-9]){32})" | transaction UniqueID

Please note that the field name is case-sensitive.

I used your example log, ran the command above, and got four groups:

Sent;2013-07-12 10:00:06.561;3c34c5e2a32c8b1a7b853958207d223f
Timeout;2013-07-12 10:00:07.796;3c34c5e2a32c8b1a7b853958207d223f
duration=1.235  

Sent;2013-07-12 10:00:06.561;d86dd547438271f2c5bc6c2bceb7e6f3
Received;2013-07-12 10:00:07.984;d86dd547438271f2c5bc6c2bceb7e6f3
duration=1.423  

Sent;2013-07-12 10:00:06.561;3c126ec175261f75fcbc567c5c89751d
Timeout;2013-07-12 10:00:07.984;3c126ec175261f75fcbc567c5c89751d
duration=1.423   

Sent;2013-07-12 10:00:06.546;7792eff1b42c35e7a208ffec68f63220
Received;2013-07-12 10:00:07.984;7792eff1b42c35e7a208ffec68f63220
duration=1.438   
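From there, getting the ID/duration table asked for in the original question is one more pipe. A sketch (note that the duration field from transaction is in seconds, so the eval converts it to milliseconds):

```spl
sourcetype=tamasvincze
| rex ";(?P<UniqueID>([a-z]|[0-9]){32})"
| transaction UniqueID
| eval duration_ms = duration * 1000
| table UniqueID duration_ms
```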

Explorer

Thank you guys, it's working now.
Have a good day!


Legend

If you go the stats route, you won't need transaction at all. Where you're going wrong is that you're trying to use BOTH transaction and stats. Choose one.

Legend

The transaction will have only one value for _time because it's only one event, even though it consists of several previously individual events that have been merged.

However, transaction creates a field called duration that is exactly what it sounds like: the total duration of the transaction, in other words the time difference between the first and the last event in it.
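So once the transactions are built, the duration field can be used directly, e.g. to chart the response times mentioned in the question. A sketch, reusing the UniqueID extraction from the accepted answer:

```spl
sourcetype="relsample"
| rex ";(?P<UniqueID>[a-z0-9]{32})"
| transaction UniqueID
| timechart avg(duration) as avg_response_time
```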

Explorer

Here is the faulty result:
UniqueID earliest latest
1 3c126ec175261f75fcbc567c5c89751d 1373616006.561 1373616006.561
2 3c34c5e2a32c8b1a7b853958207d223f 1373616006.561 1373616006.561
3 7792eff1b42c35e7a208ffec68f63220 1373616006.546 1373616006.546
4 d86dd547438271f2c5bc6c2bceb7e6f3 1373616006.561 1373616006.561

Sorry, i had to post it in 2 comments, there wasn't enough character space in one.


Explorer

Oh. So at first I tried all along with a wrong regular expression... Okay, now it looks nicer, but when I try to get the duration calculation, something goes wrong...

The earliest and latest times are the same.
My search string: sourcetype="relsample" | rex ";(?P<UniqueID>([a-z0-9]){32})" | transaction UniqueID | stats earliest(_time) as earliest latest(_time) as latest by UniqueID
What did I mess up?

Thanks in advance!



Explorer

Hi again!
Sorry for the late answer, I had other things to do.
I tried this:
source="C:\logpath\log1.txt" | reverse | rex "(?i)([a-z]|[0-9]){32}(?P<MessageID>)" | transaction MessageID

But the problem is that I get only 20 transaction groups, and each one has ~1000 rows (I have ~20000 rows in the log file).

What is the problem with this regexp? I tried it in an interpreter, and it should match only the 32-character-long identifier, and then transaction should pair them (each Sent should be paired with a Received or a Timeout). Or do I misunderstand this procedure?


Contributor

The other way, without using transaction, is the stats command:

| stats earliest(_time) as earliest latest(_time) as latest by UniqueID | eval duration = latest - earliest

This can be used instead of transaction, and it's also faster...
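Spelled out as a complete search, the stats route would look something like this (a sketch, reusing the UniqueID extraction from the accepted answer):

```spl
sourcetype="relsample"
| rex ";(?P<UniqueID>[a-z0-9]{32})"
| stats earliest(_time) as earliest latest(_time) as latest by UniqueID
| eval duration = latest - earliest
| table UniqueID duration
```

Equivalently, stats range(_time) by UniqueID is a built-in shorthand for the latest-minus-earliest eval.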


Explorer

Hi, you're right, it's even better to use [a-z0-9]{32} (that string always has a length of 32).

But I was wondering: do I need to make transactions? I don't really see it. As I see it, I'll need field extraction and/or kv.

My logic here is:
original data -> get all the Sent-Received and Sent-Timeout pairs -> pair them by that ID -> get the duration by subtracting starttime from endtime -> at this point I have the ID and the duration -> everyone's happy

Is it hard to do with Splunk?
(I am really a newbie; my colleague showed me Splunk only weeks ago, so I have absolutely no experience.)
