Hi,
We are trying to calculate how long the Value stays greater than 90%. We want to get the first time the value goes above 90% and how long it stays above 90% before it drops back down.
Thanks
02/25/2019_14:12:37.949_-0500 collection=CPU object=Processor counter="%_Processor_Time" instance=1 Value=98.396306059935881194
Like this:
|makeresults | eval Value = "80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 99 98 97 96 95 94 93 92 91 90 89 88 87 86 85 84 83 82 82 80"
| makemv Value
| mvexpand Value
| streamstats count AS _serial
| eval _time = _time - _serial
| rename COMMENT AS "Everything above generates sample event data; everything below is your solution"
| reverse
| streamstats current=f count reset_before="(Value<90)"
| reverse
| streamstats count(eval(count=0)) AS sessionID
| where count>0
| stats range(_time) AS duration first(_time) AS last_time last(_time) AS first_time BY sessionID
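If you need this per host against your real events (the sample data above has no host field), one variant would be to split both streamstats and the final stats by host. This is a rough, untested sketch; check the streamstats docs to confirm that reset_before behaves as expected together with a by-clause:
| reverse
| streamstats current=f count reset_before="(Value<90)" BY host
| reverse
| streamstats count(eval(count=0)) AS sessionID BY host
| where count>0
| stats range(_time) AS duration first(_time) AS last_time last(_time) AS first_time BY host sessionID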
Hi @woodcock
Thank you for your help. It worked. Is there any way to track a value to see whether it keeps increasing? We are trying to find memory leaks, so we need to check whether the value keeps increasing over a certain period of time.
Thanks,
Om
Ask a fresh question. Be sure to give sample raw event data.
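As a rough, untested starting point, one way to count consecutive increasing samples is to grab the previous value with streamstats and reset a counter whenever the value stops rising (the 10-sample threshold is just a placeholder):
<your base search>
| sort 0 _time
| streamstats current=f window=1 last(Value) AS prev_Value
| streamstats count reset_before="(Value<=prev_Value)"
| where count>10
Each event's count is then roughly the length of the current rising run; for multiple hosts you would split both streamstats by host.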
I think this is one of the few cases where I'd actually look into the transaction command.
Something like:
<your base search> | transaction startswith=Value>90 endswith=Value<=90 | stats values(collection) values(instance) values(duration) by _time
should work for you. Have a look at the docs for transaction; there are more advanced parameters that might help you as well.
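For example (untested, with placeholder values), maxpause limits the allowed gap between consecutive events in one transaction and maxspan caps its total duration:
<your base search> | transaction startswith=Value>90 endswith=Value<=90 maxpause=5m maxspan=1h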
Hth,
-Kai.
Actually, any use case that can be done any other way should not use the transaction command. See my solution for another way. Avoid transaction at all costs; it does not scale for production searches.
Hi @knielsen
Thank you for the help. The above query works perfectly when I specify a single host. Can you please help with multiple hosts? When I tried it with multiple hosts, the query compared events from two different hosts.
Shown below are the events the query is comparing, which belong to different hosts.
02/26/2019_08:24:37.417_-0500 collection=CPU object=Processor counter="%_Processor_Time" instance=_Total Value=98.980411764203140024
02/26/2019_08:24:37.527_-0500 collection=CPU object=Processor counter="%_Processor_Time" instance=_Total Value=0.703194212318492
host = host1 host = host2 index = perfmon source = Perfmon:CPU sourcetype = Perfmon:CPU
Thanks
I rarely, if ever, use transaction myself; we don't deal with a lot of transaction-based logs, so I am learning here too. 🙂 From the docs, I'd say you just add "host" to the transaction command.
"One field or more field names. The events are grouped into transactions based on the values of this field."
So <your base search> | transaction host startswith=Value>90 endswith=Value<=90 | stats ...
See how that works.
It worked.
Thank you very much.