I have a search that basically looks like this:
some source | stats earliest(_time) as _time latest(_time) as end by ID | eval duration = end - _time
The goal is to compute transaction durations without involving the (I assume) more expensive transaction command.
Now, this works brilliantly over short time ranges. For longer ranges I've turned on report acceleration (31 days, though we mostly look back up to two weeks). This does speed things up, but only by a factor of two. Comparing the non-accelerated search with the accelerated one, the fetch phase becomes super quick, but stats.execute_input and stats.execute_output still take a lot of time. My guess is that using earliest() together with report acceleration may not be the smartest move because it's not easily streamable. Any thoughts on how to make this acceleration-friendly?
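For reference, earliest(_time) and latest(_time) reduce to a plain min/max over _time, so an equivalent form (with a separate start field instead of overwriting _time, purely for clarity) would be something like:

    some source
    | stats min(_time) as start max(_time) as end by ID
    | eval duration = end - start

min and max are simple distributable aggregates, so this variant should be at least as acceleration-friendly as the earliest/latest version; I wouldn't expect it to fix the summary-merge cost on its own, though.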
So the way I understand it: when you accelerate the search, everything up to the first reporting command is scheduled to run every 10 minutes, creating summaries over an interval defined by auto_summarize.timespan in savedsearches.conf.
Find the value of this parameter - if you're searching over a month and the span is 1 hour, there are still going to be roughly 24*31*(average number of rows per original stats run) rows for the stats command in the accelerated report to process.
Just an idea...
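To illustrate, a sketch of what the relevant savedsearches.conf stanza might look like - the stanza name and the 10m value are hypothetical, only the setting names are from the spec:

    [transaction_durations]
    search = some source | stats earliest(_time) as _time latest(_time) as end by ID | eval duration = end - _time
    auto_summarize = true
    # granularity of the summary buckets; a larger span means fewer rows
    # for the final stats in the accelerated report to merge
    auto_summarize.timespan = 10m
    # how often the summarization search runs (the 10-minute default mentioned above)
    auto_summarize.cron_schedule = */10 * * * *

A coarser timespan trades summary freshness for fewer rows at report time, so it's worth tuning against how far back you typically search.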
In savedsearches.conf the auto_summarize.timespan value is not set at all. The job inspector says auto_summarize.timespan is "None"; it does, however, say auto_summarize.cron_schedule is every 10 minutes, as you suspected. If I were to fiddle with the cron schedule, would the rest still work automagically, or would there be more to adapt as well?
Has anyone come up with an optimisation for this?
I've bumped into exactly the same issue and had come up with the same approach Martin took 2 years ago.
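For what it's worth, one workaround I've considered is classic summary indexing instead of report acceleration: pre-aggregate per chunk on a schedule, then merge the chunks at report time. Since min-of-mins and max-of-maxes are safe to recombine, transactions spanning chunk boundaries still come out right. A rough sketch (the duration_summary index name is made up):

Scheduled search, e.g. every 10 minutes over the last 10 minutes:

    some source
    | stats min(_time) as start max(_time) as end by ID
    | collect index=duration_summary

Reporting search over the summary index:

    index=duration_summary
    | stats min(start) as start max(end) as end by ID
    | eval duration = end - start

The final stats then only has to merge one row per ID per chunk rather than re-scanning raw events, which is essentially what the accelerated report should be doing but with full control over the chunk size.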