Okay, I am sure that I have done something stupid, but I can NOT figure it out!
This search works and returns about 400 events:
tag=failure earliest=@d latest=now
| eval Series="Today"
This search works and returns about 500 events:
tag=failure earliest=-1d@d latest=@d
| eval Series = "Yesterday"
| eval _time = _time + 86400
When I put them together with append, I get only about 450 events (I expected roughly 900):
tag=failure earliest=@d latest=now
| eval Series="Today"
| append maxout=10000 [ search tag=failure earliest=-1d@d latest=@d
| eval Series = "Yesterday"
| eval _time = _time + 86400 ]
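The overlay trick in the searches above can be pictured outside Splunk. This is a minimal Python sketch with made-up epoch timestamps (not the actual data), showing why adding 86400 seconds lines yesterday's events up with today's timeline:

```python
# Each event carries an epoch _time. Shifting yesterday's events
# forward by one day (86400 s) makes them land at the same clock
# times as today's events, so both series share chart buckets.
DAY = 86400

today = [1_700_000_000 + h * 3600 for h in range(4)]   # hypothetical epochs
yesterday = [t - DAY for t in today]                   # same clock times, one day earlier

shifted = [t + DAY for t in yesterday]                 # the | eval _time = _time + 86400 step
assert shifted == today                                # now they overlay bucket-for-bucket
```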
What did I do wrong? I have run this against a couple of different Splunk versions, so I feel pretty certain that this is my problem and not Splunk. FWIW, what I really want to do is this:
tag=failure earliest=@d latest=now
| eval Series="Today"
| append maxout=10000 [ search tag=failure earliest=-1d@d latest=@d
| eval Series = "Yesterday"
| eval _time = _time + 86400 ]
| timechart span=15m count by Series
Here is the answer. Thanks to @lbowser_splunk for setting me straight.
tag=failure earliest=-1d@d latest=@d
| eval Series="Yesterday"
| eval _time = _time + 86400
| append [ search tag=failure earliest=@d latest=now
| eval Series = "Today" ]
| timechart fixedrange=f span=30m count by Series
What's different? All this does is swap the outer search and the subsearch.
What was wrong with the original version:
Apparently, the subsearch does not return events whose _time falls outside its own time window (which was earliest=-1d@d latest=@d). And of course, once | eval _time = _time + 86400 executed, most of the events were outside that window!
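One way to picture the filtering described above is this simplified Python sketch (made-up epochs, not Splunk internals): in this toy version, every event shifted by +86400 lands past the latest=@d boundary of the window and gets dropped.

```python
# Simplified sketch (assumed behavior): the subsearch's window is
# [-1d@d, @d); after adding 86400 s, each shifted event lands in
# [@d, +1d@d) and falls outside that window.
DAY = 86400
midnight_today = 1_700_006_400                      # hypothetical @d epoch
window = (midnight_today - DAY, midnight_today)     # earliest=-1d@d latest=@d

events = [window[0] + h * 3600 for h in range(24)]  # one event per hour yesterday
shifted = [t + DAY for t in events]                 # | eval _time = _time + 86400

survivors = [t for t in shifted if window[0] <= t < window[1]]
print(len(survivors))   # 0 -- every shifted event is outside the window here
```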
Also, I found that timechart was showing the original time range, which is the default behavior. Adding fixedrange=f told it to show only the range that actually had data.
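For what it's worth, the bucketing that timechart span=30m performs can be sketched as flooring each epoch to a 30-minute boundary and counting per (bucket, Series). This is a toy Python illustration with invented epochs, not Splunk internals:

```python
# Toy version of timechart span=30m count by Series:
# floor each _time to a 30-minute bucket, then count per series.
from collections import Counter

SPAN = 30 * 60   # span=30m in seconds
DAY = 86400

today = [1_700_000_100, 1_700_000_200, 1_700_003_000]           # hypothetical epochs
yesterday = [t - DAY for t in (1_700_000_150, 1_700_003_100)]

rows = [(t, "Today") for t in today] + \
       [(t + DAY, "Yesterday") for t in yesterday]              # the +86400 shift

counts = Counter((epoch - epoch % SPAN, series) for epoch, series in rows)
# The shifted Yesterday events share buckets with Today events,
# which is what lets both series plot over one x-axis.
```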
The bottom line is this: Splunk is aware of the time range of the searches, and that can influence behavior further down the pipeline. In particular, this caused me grief with the subsearch.
I was investigating the cause of an unexpected error that appeared in search.log. I do not know the cause, but I found a workaround. Thanks, lguinn!
BTW, I am using the following as a model:
http://blogs.splunk.com/2012/02/19/compare-two-time-ranges-in-one-report/