Hi,
we have multiple dashboards with about 25 searches each, and each search scans roughly 600 GB of raw data.
The dashboards should always (and only) display the previous day's data between 1pm and 9pm.
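For reference, a fixed window like that can be expressed with relative time modifiers (a sketch; the snap-to syntax is worth double-checking against your Splunk version):

earliest=-1d@d+13h latest=-1d@d+21h

Here -1d steps back one day, @d snaps to midnight of that day, and +13h/+21h move forward to 1pm and 9pm of yesterday.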
Since those dashboards unsurprisingly took forever to load, I was tasked with accelerating them.
At first I thought this was exactly the kind of task Splunk was made for, but I ran into some issues.
Each dashboard accesses similar data, so it should be straightforward to build a summary index from it. Unfortunately, some of the searches need a value in milliseconds, so summarizing via
| (si)stats count by url, cache_hit, decision, req_runtime
still retains about 50% of the data. I estimate a summary index built this way would still be around 200 GB, which is far too large for fast searches, not to mention the extra load it would put on the indexers.
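For context, the scheduled search that would populate the summary index looks roughly like this (the index and sourcetype names are placeholders, not our actual configuration):

index=web sourcetype=access earliest=-1d@d+13h latest=-1d@d+21h
| sistats count by url, cache_hit, decision, req_runtime

Because req_runtime is a millisecond value in the by clause, nearly every event ends up in its own group, which is why the summary barely shrinks the data.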
Alternatively, I could create a separate summary index for each search, but that seems insane to me: it would mean roughly 75 scheduled searches running every minute.
The final dashboard could then probably be simply accelerated.
What is the best approach for such demanding use cases? Do I really need to create 75 scheduled searches?