Hi,
I'm still new to Splunk, and I understand that I can extend a search or report's lifetime either through the GUI or by changing dispatch.ttl when scheduling a report. What I want to know is: when I have hundreds of searches and reports with extended lifetimes (7 days or more), will there be any impact on hardware resources from Splunk holding so much data for these reports and searches?
The search results will be retained on the search head for 7+ days. That means disk space will be consumed and not released until the search artifacts expire. Each user's role-based disk quota is also consumed by retained results, which may prevent their future searches from running.
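For context, the retention the question refers to is typically set per saved search in savedsearches.conf. A minimal sketch (the stanza name here is a placeholder, not from your config):

```ini
[my_scheduled_report]
# Keep the artifacts of each run for 7 days, expressed in seconds.
dispatch.ttl = 604800
```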
That makes sense. So is there any query, or any other way, to find out how many MB these search results are consuming on disk?
There is no direct REST endpoint to query for the current state of quota consumption.
You might be able to dig something out of the _introspection or _metrics indexes, but I wouldn't count on much granularity.
You will probably need to write your own TA/scripted input that measures the disk space used under the $SPLUNK_HOME/var/run/splunk/dispatch directory.
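A minimal sketch of what such a scripted input could run, assuming the standard dispatch location (verify the path on your own installation; the function name and output field names are my own invention):

```shell
#!/bin/sh
# Sketch: report disk usage of retained search artifacts in the dispatch
# directory, one line per search ID (SID) plus a total. Splunk can index
# this output via a scripted input and chart growth over time.
report_dispatch_usage() {
    # Accept an explicit directory, falling back to the usual dispatch path.
    dispatch_dir="${1:-${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch}"

    if [ ! -d "$dispatch_dir" ]; then
        echo "dispatch directory not found: $dispatch_dir" >&2
        return 1
    fi

    # One line per search artifact: the SID directory name and its size in KB.
    du -sk "$dispatch_dir"/* 2>/dev/null | while read -r kb dir; do
        echo "sid=$(basename "$dir") size_kb=$kb"
    done

    # Total consumption across all retained artifacts.
    du -sk "$dispatch_dir" | awk '{print "total_kb=" $1}'
}
```

Run it with no argument on the search head, or point it at a test directory; the key=value output is easy to extract fields from once indexed.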