I have a Splunk query that retrieves one hour's worth of data for one day of the week over four weeks. This week's change from daylight saving time to standard time has caused unexpected results from the timewrap command. The time offset for the previous weeks is -0500, and for this week it is -0600. After running the timewrap command, the time for the previous weeks is one hour behind the current week. Two things seem odd to me: first, that the previous weeks show up one hour behind the current week, when I would expect the opposite; and second, that there is any difference at all, since I am only querying for one hour.
Search:
host=hosta earliest=-4w@w latest=@m date_wday=monday date_hour=11 | bucket _time span=1m | stats count as total by _time | timewrap w
Results before time change:
_time, total_latest_week, total_1week_before, total_2weeks_before, total_3weeks_before, total_4weeks_before
2015-11-02 11:00:00,1009,1024,1003,784,1032
Now that we have changed to standard time (-0600) from daylight saving time (-0500), the results show:
_time, total_latest_week, total_1week_before, total_2weeks_before, total_3weeks_before, total_4weeks_before
2015-11-02 10:00:00,,1024,1003,784,1032
...
2015-11-02 11:00:00,1009,,,,
...
The issue is likely that the timewrap script (etc/apps/timewrap/bin/timewrap.py) is adding 7*86400 seconds at a time to historic times, rather than going by day boundaries and adding hours. That would explain what you are seeing: date_hour searches the 11:00 hour, but that is an absolute time on that day. Adding n*7*86400 seconds, which then crosses the timezone boundary, ends up at 10:00 because of the "extra" hour gained by coming off daylight time.
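To illustrate the epoch arithmetic (a minimal Python sketch, not the actual timewrap.py code; it just shows what adding 7*86400 seconds across the US Central fall-back does to the wall clock):

from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

central = ZoneInfo("America/Chicago")

# An event from the previous Monday's 11:00 hour, while CDT (-0500) was in effect.
event = datetime(2015, 10, 26, 11, 0, tzinfo=central)

# Shift it forward by exactly 7*86400 seconds, the way a naive epoch-based wrap would,
# crossing the Nov 1 fall-back from CDT (-0500) to CST (-0600).
wrapped_epoch = event.timestamp() + 7 * 86400
wrapped = datetime.fromtimestamp(wrapped_epoch, tz=central)

print(event)    # 2015-10-26 11:00:00-05:00
print(wrapped)  # 2015-11-02 10:00:00-06:00  (wall clock lands one hour early)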
This can be fixed in the search by identifying data on the other side of the boundary and artificially adding one hour to anything on that side (using a US Central split point of 11/1 7:00 UTC = 11/1 2:00 CDT = 11/1 1:00 CST):
host=hosta earliest=-4w@w latest=@m date_wday=monday date_hour=11 | bucket _time span=1m | stats count as total by _time | eval _time=if(_time<=1446361200, _time+3600, _time) | timewrap w
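If you want to sanity-check the split-point epoch used above (a quick Python check, not part of the search):

from datetime import datetime, timezone

# 1446361200 should be 2015-11-01 07:00 UTC, i.e. 2:00 CDT / 1:00 CST,
# the moment US Central falls back from -0500 to -0600.
print(datetime.fromtimestamp(1446361200, tz=timezone.utc))
# 2015-11-01 07:00:00+00:00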
I tried out your suggestion and it worked perfectly. I would still like to find a more permanent solution, though. What are your thoughts on using the query below?
host=hosta earliest=-4w@w latest=@m date_wday=monday date_hour=11 | bucket _time span=1m | stats count as total by _time | eval nowTZmin = strftime(time(),"%Ez") | eval eventTZmin = strftime(_time,"%Ez") | eval _time = _time + ((nowTZmin-eventTZmin)*-60) | fields - nowTZmin, eventTZmin | timewrap w
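For reference, the offset-difference arithmetic that eval performs looks roughly like this outside Splunk (an illustrative Python sketch only, assuming %Ez returns the UTC offset in minutes, as the variable names suggest):

from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

central = ZoneInfo("America/Chicago")

def offset_minutes(dt):
    # UTC offset in minutes, i.e. what the query assumes %Ez yields.
    return int(dt.utcoffset().total_seconds() // 60)

# "now" is after the fall-back (-0600); the historic event was before it (-0500).
now = datetime(2015, 11, 2, 11, 0, tzinfo=central)
event = datetime(2015, 10, 26, 11, 0, tzinfo=central)

nowTZmin = offset_minutes(now)      # -360
eventTZmin = offset_minutes(event)  # -300

# Same arithmetic as the eval: (nowTZmin - eventTZmin) * -60 seconds.
correction = (nowTZmin - eventTZmin) * -60       # +3600 for this pair
shifted_epoch = event.timestamp() + correction

# After the correction, a naive 7*86400 wrap lands back on the 11:00 hour.
wrapped = datetime.fromtimestamp(shifted_epoch + 7 * 86400, tz=central)
print(correction)  # 3600
print(wrapped)     # 2015-11-02 11:00:00-06:00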
That looks like it would work well - I assume you've tested it. Perhaps (eventTZmin-nowTZmin)*60 would make it easier to read.