I have an index of events, each of which has an enter and an exit timestamp, where _time is associated with the former.
My first search is very simple and shows the daily enter frequency:
index=my_index | timechart span=1d count as EnterCount
This produces the following table:
_time,EnterCount
2016-08-08,0
2016-08-09,16
2016-08-10,20
2016-08-11,23
2016-08-12,13
2016-08-13,2
2016-08-14,0
2016-08-15,0
2016-08-16,11
I then have another search that shows the daily exit frequency:
index=my_index | eval _time = strptime(Exit, "%Y-%m-%d %H:%M:%S") | timechart span=1d count as ExitCount
This produces the following table:
_time,ExitCount
2016-08-08,1
2016-08-09,4
2016-08-10,9
2016-08-11,16
2016-08-12,4
2016-08-13,0
2016-08-14,0
2016-08-15,0
2016-08-16,4
To chart the relationship between enter and exit, I use appendcols to put them together:
index=my_index | timechart span=1d count as EnterCount | appendcols [ search index=my_index | eval _time = strptime(Exit, "%Y-%m-%d %H:%M:%S") | timechart span=1d count as ExitCount ]
The resulting table, however, displays incorrect values for ExitCount, bringing the results forward by 3 days:
_time,EnterCount,ExitCount
2016-08-08,0,16
2016-08-09,16,4
2016-08-10,20,0
2016-08-11,23,0
2016-08-12,13,0
2016-08-13,2,4
2016-08-14,0,10
2016-08-15,0,13
2016-08-16,11,9
Why does this happen, and what can I do to ensure that the correct association is being made?
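For illustration, the symptom can be reproduced outside Splunk: appendcols pairs subsearch rows with main-search rows by position, not by _time, so if one series starts on a different day its column shifts. A minimal Python sketch with hypothetical counts modeled on the tables above (the exit series is assumed to be missing its first three days):

```python
from datetime import date, timedelta

# Hypothetical daily counts; the exit series drops its first three days,
# mimicking a subsearch whose earliest bucket starts later.
enter_rows = [(date(2016, 8, 8) + timedelta(days=i), c)
              for i, c in enumerate([0, 16, 20, 23, 13, 2, 0, 0, 11])]
exit_rows = [(date(2016, 8, 8) + timedelta(days=i), c)
             for i, c in enumerate([1, 4, 9, 16, 4, 0, 0, 0, 4])][3:]

# Positional pasting (roughly what appendcols does): the columns shift.
positional = [(t, e, x[1]) for (t, e), x in zip(enter_rows, exit_rows)]

# Keyed merge on the date (what you actually want).
exit_by_day = dict(exit_rows)
keyed = [(t, e, exit_by_day.get(t, 0)) for t, e in enter_rows]
```

With the shifted input, `positional` reproduces the broken table (ExitCount pulled forward), while `keyed` lines each count up with its own day.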
Thank you and best regards,
If you have the same date range, why would you want to do appendcols? Please provide context.
In any case, if the time for your Exit search is shifted by 3 days, you can either correct the time in the inner query, or else try including earliest and latest dates directly in the base search of both correlated searches.
Thanks for the response!
I use appendcols because I am putting two different metrics into the same table. Since they both cover the same range I can easily do this, but for some reason they are being matched incorrectly.
@andrewtobec... Did you try any of the options provided here? Since you said the time was shifting by 3 days, I suggested correcting the time in the inner query by 3 days. There are two other answers that avoid appendcols; can you try them and confirm?
Thanks for the follow-up. I have found a working solution and updated with the information. Thank you for your support!
Do it like this:
index=my_index | bin _time span=1d | eval exit_time = strptime(Exit, "%Y-%m-%d %H:%M:%S") | bin exit_time span=1d | multireport [| stats count AS EnterCount by _time] [| stats count AS ExitCount by exit_time | rename exit_time AS _time] | stats values(*) AS * BY _time
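The logic of this multireport pipeline can be sketched in plain Python with hypothetical events: bucket both timestamps to the day, count each series independently, then merge the two count tables on the day key (the role of the final `stats values(*) AS * BY _time`):

```python
from collections import Counter
from datetime import datetime

# Hypothetical events as (_time, Exit) string pairs.
events = [
    ("2016-08-09 10:00:00", "2016-08-10 09:30:00"),
    ("2016-08-09 11:00:00", "2016-08-09 23:00:00"),
    ("2016-08-10 08:15:00", "2016-08-11 07:00:00"),
]

def day(ts):
    """Like bin span=1d: truncate a timestamp to its date."""
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").date()

enter_count = Counter(day(t) for t, _ in events)   # stats count BY _time
exit_count = Counter(day(x) for _, x in events)    # stats count BY exit_time

# Merge the two reports into one row per day, defaulting missing cells to 0.
all_days = sorted(set(enter_count) | set(exit_count))
table = [(d, enter_count.get(d, 0), exit_count.get(d, 0)) for d in all_days]
```

Because the merge is keyed on the day rather than on row position, a day that appears in only one series still lands in the right row.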
This part produces random test data; only _time and exit_time matter.
| gentimes start="01/25/2017:23:00:00" end="01/27/2017:01:00:00" increment=23m | append[| gentimes start="01/26/2017:03:00:00" end="01/26/2017:21:00:00" increment=47m ] | append[| gentimes start="01/26/2017:01:17:00" end="01/26/2017:23:18:00" increment=21m ] | eval _time = starttime | table _time | eval rand1=random() | eval rand2=substr(rand1,1,5) | eval exit_time = _time + tonumber(rand2) | table _time exit_time
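The same kind of test data can be sketched in Python, under the stated assumptions (enter times on a fixed increment, each exit a random number of seconds later, as in the `exit_time = _time + tonumber(rand2)` step above):

```python
import random
from datetime import datetime, timedelta

# Hypothetical stand-in for the gentimes block: one enter every 23 minutes,
# each paired with an exit a random 10000-99998 seconds later.
start = datetime(2017, 1, 25, 23, 0)
end = datetime(2017, 1, 27, 1, 0)

events = []
t = start
while t < end:
    events.append((t, t + timedelta(seconds=random.randrange(10_000, 99_999))))
    t += timedelta(minutes=23)
```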
This part separates _time and exit_time into one record per type, and reports on the counts:
| eval Times ="enter="._time." exit=".exit_time | table Times | makemv Times | mvexpand Times | makemv delim="=" Times | eval type=mvindex(Times,0),_time=mvindex(Times,1) | table _time, type | bin _time span=1d | chart count over _time by type
Resulting in this -
_time      enter  exit
1/25/2017  3      0
1/26/2017  149    85
1/27/2017  2      69
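The expand-and-count idea above can be illustrated in Python with hypothetical events: each event is exploded into two (day, type) records, which are then counted per cell, mirroring `chart count over _time by type`:

```python
from collections import Counter
from datetime import datetime

# Hypothetical events with enter (_time) and exit timestamps.
events = [
    (datetime(2017, 1, 25, 23, 30), datetime(2017, 1, 26, 2, 0)),
    (datetime(2017, 1, 26, 1, 0), datetime(2017, 1, 26, 9, 0)),
    (datetime(2017, 1, 26, 22, 0), datetime(2017, 1, 27, 4, 0)),
]

# Like mvexpand: turn each event into two (day, type) records.
records = []
for enter, exit_ in events:
    records.append((enter.date(), "enter"))
    records.append((exit_.date(), "exit"))

# Like chart count over _time by type: count records per (day, type) cell.
cells = Counter(records)
days = sorted({d for d, _ in records})
chart = [(d, cells[(d, "enter")], cells[(d, "exit")]) for d in days]
```

Because both timestamps become ordinary records before counting, the two series can never drift out of alignment.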
In one of my other questions, @rjthibod suggested working with the append functions (I had previously found a solution using join, which I was told isn't the most efficient way of doing things). The following code works perfectly:
index=my_index | chart count as EnterCount over _time span=1d | append [ search index=my_index | eval _time = strptime(Exit, "%Y-%m-%d %H:%M:%S") | chart count as ExitCount over _time span=1d ] | timechart span=1d latest(*) as * | fillnull value=0
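A sketch of why this works, in Python with hypothetical partial tables: append stacks the two partial result sets on top of each other, `timechart latest(*)` then collapses them back to one row per _time, and `fillnull value=0` fills the remaining gaps:

```python
# Hypothetical partial tables, as produced by the two charts before append.
enter_table = {"2016-08-09": 16, "2016-08-10": 20}  # _time -> EnterCount
exit_table = {"2016-08-10": 9, "2016-08-11": 16}    # _time -> ExitCount

# append: stack the rows from both searches into one result set.
rows = [(t, {"EnterCount": c}) for t, c in enter_table.items()]
rows += [(t, {"ExitCount": c}) for t, c in exit_table.items()]

# timechart latest(*) + fillnull: one row per _time, missing cells become 0.
merged = {}
for t, cols in rows:
    merged.setdefault(t, {"EnterCount": 0, "ExitCount": 0}).update(cols)

table = sorted((t, c["EnterCount"], c["ExitCount"]) for t, c in merged.items())
```

The coalescing step is keyed on _time, so unlike appendcols it cannot shift one column relative to the other.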
Thanks to @DalJeanis @woodcock and @niketnilay for their valuable input, I very much appreciate it.
Our solutions also avoid append for the same reason you should avoid join: subsearch limits. Your append-based solution will break down once the appended results exceed roughly 50K rows.