We are currently using Event 45 to calculate the average load time for Outlook: Microsoft KB & Sample Data
Here is the search we have, using Splunk 6:
index=win_desk EventCode=45 sourcetype="WinEventLog:Application" SourceName=Outlook
| rex field=_raw "Boot Time \(Milliseconds\)\: (?<BootTime_ms>\d+)" max_match=0
| streamstats sum(BootTime_ms) as Evt_sum_BootTime window=1
| eval Evt_BootTime_sec = Evt_sum_BootTime / 1000
| bucket _time span=1d
| stats avg(Evt_BootTime_sec) by _time
When I run this search over the following time range (10/1/13 12:00:00.000 AM to 10/6/13 12:00:00.000 AM), I get these results:
_time avg(Evt_BootTime_sec)
2013-10-01 24.834010
2013-10-02 7.831655
2013-10-03 7.796068
2013-10-04 4.842439
2013-10-05 4.59200
So far it all looks good! The problem appears when we expand the time frame to 30 days; the results go crazy!
10/1/13 12:00:00.000 AM to 11/1/13 12:00:00.000 AM
_time avg(Evt_BootTime_sec)
2013-10-01 772.931010
2013-10-02 755.928655
2013-10-03 755.893068
2013-10-04 752.939439
2013-10-05 752.689000
2013-10-06 756.884800
2013-10-07 719.525329
2013-10-08 687.182311
As you can see in the data, 10/1/2013 has jumped from 24 seconds to over 12 minutes just by changing the date range. The goal is to track our progress month by month, showing a steady downward trend in a chart.
What am I doing wrong here?
The rex is left over from our 5.x Splunk servers.
We discovered that Outlook log data a few weeks ago. Very cool. For our instance of Splunk, which is also version 6, the boot time field for add-ins is auto-discovered. Is the value not getting extracted by default for you? Did you find something wrong with it that led you to the regex extraction? I'll check whether I am getting similar variation over time.
This is because your streamstats are accumulated over more than one day now. I don't think you need the streamstats at all. If you want the average daily load calculated over the last X days, you could do this:
index=win_desk EventCode=45 sourcetype="WinEventLog:Application" SourceName=Outlook
| rex field=_raw "Boot Time \(Milliseconds\)\: (?<BootTime_ms>\d+)" max_match=0
| eval Evt_BootTime_sec = BootTime_ms / 1000
| bucket _time span=1d
| stats avg(Evt_BootTime_sec) by _time
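And since the stated goal is a month-over-month chart, the same pipeline can feed timechart instead of the bucket/stats pair. This is only a sketch under the same assumption of one Boot Time value per event; the field names match the search above, and avg_boot_sec is just a label I picked:
index=win_desk EventCode=45 sourcetype="WinEventLog:Application" SourceName=Outlook
| rex field=_raw "Boot Time \(Milliseconds\)\: (?<BootTime_ms>\d+)" max_match=0
| eval Evt_BootTime_sec = BootTime_ms / 1000
| timechart span=1d avg(Evt_BootTime_sec) AS avg_boot_sec
With span=1d you get one averaged point per day, which drops straight into a line chart.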
Ah, that's a pretty brilliant use of streamstats. But I think that something must be wrong with your event parsing if you are getting multiple BootTimes in a single event.
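If it helps, a quick way to check whether any events really do carry more than one Boot Time value is something like this (just a sketch reusing your rex; boot_time_count and events_with_multiple_boot_times are placeholder names):
index=win_desk EventCode=45 sourcetype="WinEventLog:Application" SourceName=Outlook
| rex field=_raw "Boot Time \(Milliseconds\)\: (?<BootTime_ms>\d+)" max_match=0
| eval boot_time_count = mvcount(BootTime_ms)
| where boot_time_count > 1
| stats count AS events_with_multiple_boot_times
If that count comes back zero, the multivalue behavior is not coming from the events themselves.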
Looking forward to seeing your final search...
I tried your approach, but BootTime is auto-discovered as a multivalue field. Streamstats is doing exactly what is required, summing the time for each record.
The really odd thing is that it works perfectly against the single indexer where it was originally created. When we execute it against our new cluster, it fails with the bad values.
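For anyone comparing the two environments, a per-event sanity check along these lines should show how many Boot Time values each event carries and what the window=1 sum comes out as (just a sketch using the same rex and field names as above; boot_time_count is a placeholder):
index=win_desk EventCode=45 sourcetype="WinEventLog:Application" SourceName=Outlook
| rex field=_raw "Boot Time \(Milliseconds\)\: (?<BootTime_ms>\d+)" max_match=0
| streamstats sum(BootTime_ms) as Evt_sum_BootTime window=1
| eval boot_time_count = mvcount(BootTime_ms)
| table _time boot_time_count BootTime_ms Evt_sum_BootTime
Running it against the single indexer and the cluster over the same time range should make it obvious where the per-event sums start to diverge.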
Thanks for the help anyway. I am going to see if Splunk can give us a hand with this, and I will post the final search when it is done.