So, our application logs the durations of logged method calls as ..dT=XXXms.., and I would like to use this for nice Splunk graphs.
This works brilliantly if I use a query like this (in the advanced charting view):
eventtype="app" dT | timechart avg(dT)
My problem is that the application occasionally logs absurdly high durations, going up to several years - clearly a bug in the logging framework we are using.
These high dT values completely skew my nice timechart graphs and mess up the statistics. How can I filter out these values?
I already tried filtering those log statements with a where clause, but so far this has not worked for me - the result set stays empty:
eventtype="app" dT | where dT<3600000 | timechart avg(dT)
Any ideas would be much appreciated!
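As a quick sanity check (just a sketch - dT_num is only an illustrative name), this shows whether Splunk can treat dT as a number at all; tonumber() returns null for values like 123ms, so numeric_only should come out as 0 if the ms suffix is the problem:
eventtype="app" dT | eval dT_num=tonumber(dT) | stats count(dT) AS with_suffix, count(dT_num) AS numeric_only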
Hi fgysin,
you can use the filter in your base search like this:
eventtype="app" dT<3600000 | timechart avg(dT)
cheers, MuS
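If dT still carries the ms suffix at search time, the numeric filter may not match anything; one possible variation (just a sketch - dT_ms is an illustrative field name) extracts the number with rex before filtering:
eventtype="app" dT | rex field=dT "(?<dT_ms>\d+)ms" | eval dT_ms=tonumber(dT_ms) | where dT_ms<3600000 | timechart avg(dT_ms)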
Awesome stuff, much appreciated.
eventtype="app" dT | eval dT = tonumber(substr(dT,0,len(dT)-2)) | where dT<3600000 | timechart avg(dT)
Ah I see. So how would I remove the ms? With the rex command?
Ahh I see, your field is like dT=XXXms ... so remove the ms first and then you can filter 😉
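One way to do that (just a sketch using eval replace; the regex assumes the value always ends in ms) would be:
eventtype="app" dT | eval dT=tonumber(replace(dT,"ms$","")) | where dT<3600000 | timechart avg(dT)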
take this run everywhere example:
index=_internal earliest=-2h@h latest=-1h@h kb | where kb<128 | stats count
index=_internal earliest=-2h@h latest=-1h@h kb<128 | stats count
Both will return the same count. Is this dT field numeric or a string?
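Applied to the dT field as-is (again just a sketch), the same comparison shows whether the filter works on your data - if the two counts differ, or both come back empty, dT is probably being treated as a string:
eventtype="app" dT | where dT<3600000 | stats count
eventtype="app" dT<3600000 | stats count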
Hmm, that does not work for me... The graph is still plotting average values in the millions and billions.