A single search driving multiple post-processing panels would be ludicrously useful for me, and apparently it can be done according to http://www.splunk.com/base/Documentation/4.1.5/Developer/FormSearchPostProcess
However, in my previous question http://answers.splunk.com/questions/4933/is-it-possible-to-reuse-the-same-raw-search-results-multipl... gkanapathy mentioned that, at least for v4.1.4, only up to 10,000 results will be passed to post-process searches. This takes the feature down from ludicrously useful to marginally useful for me, as I will nearly always have more than 10k results. Think dashboards running searches over entire Apache access logs to provide multiple access-stats panels from a single search, like this: http://goaccess.prosoftcorp.com/images/goaccess_screenshot1M-03L.png (please excuse me if this link is not allowed; moderators, feel free to remove it if necessary. I'm actually fighting to get my devs to use Splunk instead of the aforementioned tool, so this is quite important to me).
Alternatively, if anyone knows another way of achieving what I'm trying to do, please let me know.
Cheers,
Glenn
sideview is right. Although it can seem tricky to summarize data in the base search while keeping the other searches looking good, it turns out to be pretty easy.
For example, I've got a dashboard that shows page request times for selected URLs on our website for various browsers. The dashboard shows a line-chart histogram of average request time by browser, which uses logarithmic bucketization of the request times, and then a few charts for 95th, 75th, and 50th percentile request times by browser. I was worried that my stats function would lose the time info, or that my percentile calculations would be skewed, but neither turned out to be a problem.
Here is the base search:
sourcetype=access_combined_wcookie Browser=*
| bucket _time span=1s
| stats perc95(request_time) as perc95
perc75(request_time) as perc75
perc50(request_time) as perc50
avg(request_time) as avg
by Browser _time
Note that I bucket _time into 1s intervals because I want to keep 1s resolution for later functionality (this could be adjusted based on the time picker range, but I'm not there yet). This is also important because, if you have sub-second timestamp resolution, stats will produce far more output rows than you want or need to pass to the post-processes. Now the stats command simply produces my percentile calculations as per-second summaries, by browser.
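To see why the 1s bucketing keeps the row count down, here is a hypothetical Python sketch (not Splunk internals; the event data is made up) of what bucket + stats do to raw events:

```python
from collections import defaultdict
import statistics

# Hypothetical raw events: (timestamp in seconds, browser, request_time)
events = [
    (10.1, "Chrome", 0.20), (10.4, "Chrome", 0.40), (10.9, "Chrome", 0.90),
    (10.2, "Firefox", 0.30), (11.5, "Chrome", 0.25), (11.7, "Firefox", 0.60),
]

# "bucket _time span=1s": truncate each timestamp to its 1-second bucket
groups = defaultdict(list)
for ts, browser, rt in events:
    groups[(int(ts), browser)].append(rt)

# "stats avg(request_time) ... by Browser _time": one summary row per
# (second, browser) pair, no matter how many raw events landed in it
summary = {
    key: {"avg": statistics.mean(times), "count": len(times)}
    for key, times in groups.items()
}

print(len(events), "events reduced to", len(summary), "summary rows")
```

The row count passed downstream is bounded by (distinct browsers) x (seconds in the time range), independent of raw event volume.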
The post-process searches are easy:
Request Time Histogram
bin avg span=1.2log1.6
| stats count(eval(Browser="Chrome")) AS Chrome
count(eval(Browser="Firefox")) AS Firefox
count(eval(Browser="MSIE")) AS MSIE
count(eval(Browser="Safari")) AS Safari
by avg
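As I understand it, span=1.2log1.6 places bucket edges at coeff * base^n. A rough Python sketch of that bucketing (my approximation, not Splunk's actual implementation):

```python
import math

COEFF, BASE = 1.2, 1.6   # mirrors "bin avg span=1.2log1.6"

def log_bucket(value, coeff=COEFF, base=BASE):
    """Return the lower edge of the logarithmic bucket containing value.

    Assumes bucket edges sit at coeff * base**n, which is my reading of
    Splunk's span=<coeff>log<base> syntax.
    """
    n = math.floor(math.log(value / coeff, base))
    return coeff * base ** n

for rt in (0.5, 1.0, 2.0, 5.0):
    print(rt, "->", round(log_bucket(rt), 4))
```

The point of the log spacing is that small request times get fine-grained buckets while the long tail gets progressively wider ones, which keeps the histogram readable.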
95th Percentile Chart
timechart avg(perc95) AS "RequestTime" by Browser
Splunk makes a distinction between 'events' and 'results', and although it can be hard to follow, it is very important here.
That is because the infamous 10,000-row limit on postProcess searches used to apply both to result sets of 'events' and to result sets of 'results', but as of 4.1.4 or 4.1.5 Splunk changed it so that it applies ONLY when your rows are untransformed events.
So as long as you follow best practice for postProcess, compressing your 'base search' with something like stats count by foo bar baz, AND the statistics in your postProcess searches account for the cases where count is greater than 1, there is no limit and you'll be fine.
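The "count greater than 1" point can be illustrated with a small Python sketch over made-up compressed rows: a postProcess must aggregate the existing count field (in SPL, stats sum(count) as count by status) rather than re-counting rows:

```python
# Hypothetical compressed base-search output, one row per (host, status)
# group with a count, mimicking "stats count by foo bar baz"
rows = [
    {"host": "web1", "status": 200, "count": 98},
    {"host": "web1", "status": 500, "count": 2},
    {"host": "web2", "status": 200, "count": 50},
]

# Wrong: counting rows per status would report 200 -> 2 and 500 -> 1.
# Right: sum the count field that the base search already computed.
by_status = {}
for r in rows:
    by_status[r["status"]] = by_status.get(r["status"], 0) + r["count"]

print(by_status)
```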
For more info, go to the Launcher, install the "UI Examples for 4.1" app from Splunkbase, and check out the example view "using postprocess on dashboards".
Have you considered summary indexing? It sounds like the searches you describe are prime candidates for summarization.