We are trying to use report acceleration for a saved search that has a performance problem: it takes several minutes to execute. It seems there is a 100k hot bucket event threshold that must be met before Splunk will build the acceleration summary. Otherwise, the Manager > Report Acceleration Summaries page displays "Not enough data to summarize" in the Summary Status column.
This 100k event threshold is probably sensible for most cases, but we just want to cut down the time it takes to retrieve our results, regardless of the number of events in the hot bucket summary range. Is there some configuration setting (.conf) where we can control the imposed 100k limit? Storage size is not a concern for us, so this limit seems a little harsh when the execution time is so long.
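For reference, here is roughly how the acceleration is set up in savedsearches.conf (a sketch: the stanza name and search are placeholders, and as far as we can tell none of the auto_summarize.* settings we've found expose the 100k threshold):

    [our_slow_report]
    # Hypothetical search; ours is a much heavier aggregation
    search = index=web sourcetype=access_combined | stats count by status
    # Enable report acceleration for this saved search
    auto_summarize = 1
    # How often splunkd probes/updates the summary for this search
    auto_summarize.cron_schedule = */10 * * * *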
We'd like to avoid a summary index if possible, since it has reliability issues (gaps when splunkd is down, etc.) and the overhead of the backfill command.
One thing you could do is schedule the saved search on a regular interval and use that saved search in the dashboard. Splunk will grab the results of the latest run of the scheduled search. The drawback is that the data is stale, by at most the interval at which the scheduled search runs; see the sketch below.
A scheduled search improves performance but sacrifices real-time results, and depending on the dataset size it might not be a great alternative.
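A minimal sketch of that in savedsearches.conf, assuming a hypothetical report name and a 15-minute refresh:

    [our_slow_report]
    # Run on a schedule so dashboards can reuse the cached results
    enableSched = 1
    cron_schedule = */15 * * * *
    dispatch.earliest_time = -24h
    dispatch.latest_time = now

A dashboard panel that references the saved search by name (rather than embedding the raw SPL) will pick up the most recent scheduled run instead of dispatching a new job, and you can pull a prior run's results explicitly with | loadjob savedsearch="admin:search:our_slow_report" (user, app, and report name here are placeholders).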
Good to hear this helped 😉
Brilliant idea - this doesn't always need to be real-time for what I'm using it for - so this periodic scheduled search worked out perfectly. I just had to set the cron schedule to correspond with the interval I was interested in.
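In case it helps anyone else, this is roughly what I ended up with in savedsearches.conf (report name made up, and I'm assuming a 15-minute interval here; adjust the window to whatever interval you care about):

    [our_slow_report]
    enableSched = 1
    # Fire every 15 minutes...
    cron_schedule = */15 * * * *
    # ...and search only the 15-minute window that just closed
    dispatch.earliest_time = -15m@m
    dispatch.latest_time = @m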
Another interesting note: even though the saved search couldn't be accelerated, the summary was continuously being probed to check for the 100k threshold, which essentially crippled our server (100% CPU on all 4 cores). I found | summarize action=probe id=... running constantly in the job manager, checking whether the accelerated search threshold had been met. For now, I have to disable the acceleration until I can trick it into seeing the 100k events. =0 Splunk, please take note: if the probe search takes longer than the polling period, don't keep queueing it up, or you'll kill the box.
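For anyone hitting the same thing, disabling the acceleration is a one-line change in savedsearches.conf (same hypothetical stanza as above; it's the same thing the acceleration checkbox in Manager toggles):

    [our_slow_report]
    # Stop splunkd from dispatching the summarize probe for this report
    auto_summarize = 0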