It's important to understand that the HiddenPostProcess module has a hard-coded, non-configurable input limit of 10,000 events/results, because it was designed to consume a data cube built from events rather than the raw events themselves.
So, in your situation, I would rather suggest that you modify the base search to build a data cube that can be consumed by all of the downstream HiddenPostProcess modules, like so:
base search:
NOT platformtype=MOT CALL_END | eval duration=round(callLength/1000) | bucket _time span=1m | sistats avg(duration) count by appID, _time
first post-process:
| stats count
second post-process:
| stats avg(duration) as avgDuration
third post-process:
| timechart span="1m" avg(duration) by appID
A few remarks:
Because we are now feeding results to the downstream HiddenPostProcess modules instead of events, there is no need to explicitly declare a list of fields to pass.
We are using sistats when building the data cube. This ensures that enough intermediate statistical information is collected for the post-process searches to work without having to be adapted to the shape of the data cube. For example, without sistats, if the base search performs a stats count, the post-process would need to run stats sum(count) as count to return the total event count. With sistats, the post-process can simply invoke stats count or even stats avg(field).
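To make the difference concrete, here is a side-by-side sketch using the fields from the search above (the plain-stats variant is hypothetical, shown only for contrast):

```
# base search built with plain stats:
NOT platformtype=MOT CALL_END | bucket _time span=1m | stats count by appID, _time
# the post-process must then adapt to the pre-aggregated rows:
| stats sum(count) as count

# base search built with sistats (as recommended above):
NOT platformtype=MOT CALL_END | bucket _time span=1m | sistats count by appID, _time
# the post-process can be written as if it ran against raw events:
| stats count
```

The sistats variant stores the intermediate summary data the stats family needs, which is why the downstream searches keep their natural form.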
We are using bucket to discretize the data cube's _time dimension into 1-minute "buckets", which significantly reduces the number of results the base search yields.
Although this technique should dramatically reduce the number of objects passed to HiddenPostProcess, you still have to ensure that this doesn't exceed 10,000 when searching over large time windows. Otherwise, the post-process modules will only operate on the first 10,000 results they receive and will therefore show incomplete data.
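As a quick sanity check, you can run the base search on its own over your largest expected time window and count the rows it produces (a sketch reusing the search above):

```
NOT platformtype=MOT CALL_END | eval duration=round(callLength/1000) | bucket _time span=1m | sistats avg(duration) count by appID, _time | stats count
```

The row count is at most (time window in minutes) x (number of distinct appID values), so with a 24-hour window (1,440 one-minute buckets) the cube stays under 10,000 rows as long as there are no more than 6 distinct appID values. If you do approach the limit, widening the bucket span (e.g. span=5m) shrinks the cube proportionally.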
Further reading - Use one search for a whole dashboard