Resource impact when extending search job lifetime
Hi,
I'm still new to Splunk. I understand that I can extend the lifetime of a search or report either through the GUI or by changing dispatch.ttl when scheduling a report. What happens when I have hundreds of searches and reports with extended lifetimes (7 days or more)? Will there be any impact on hardware resources when Splunk holds so much data for these reports and searches?
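For reference, dispatch.ttl is set per search in savedsearches.conf, in seconds (appending `p` multiplies the TTL by the search's scheduled period). A 7-day retention might look like this (the stanza name here is just an example):

```
[my_weekly_report]
# Keep this search's results for 7 days (604800 seconds)
dispatch.ttl = 604800
```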
The search results will be retained on the search head for 7+ days. That means disk space is consumed and not released until the search expires. The role's disk quota is also consumed, which may prevent future searches from running.
If this reply helps you, Karma would be appreciated.
That makes sense. So is there a query, or any other way, to find out how many MB these search results are consuming on disk?
There is no direct REST endpoint for querying the current state of quota consumption.
You might be able to dig something out of the _introspection or _metrics indexes, but I wouldn't count on much granularity.
You probably need to build your own TA/scripted input that measures the disk space used under the $SPLUNK_HOME/var/run/splunk/dispatch directory.
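As a starting point for such a scripted input, here is a minimal Python sketch that sums file sizes per job directory under the dispatch location and prints the largest ones. It assumes the default dispatch path ($SPLUNK_HOME/var/run/splunk/dispatch) and is not an official Splunk tool:

```python
import os


def dir_usage_mb(path: str) -> float:
    """Recursively sum file sizes under `path`, in megabytes."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                # Job artifacts can disappear mid-walk when a search expires.
                pass
    return total / (1024 * 1024)


if __name__ == "__main__":
    # Assumed default dispatch location; adjust for your deployment.
    dispatch = os.path.join(
        os.environ.get("SPLUNK_HOME", "/opt/splunk"),
        "var", "run", "splunk", "dispatch",
    )
    if os.path.isdir(dispatch):
        jobs = [
            (dir_usage_mb(os.path.join(dispatch, d)), d)
            for d in os.listdir(dispatch)
            if os.path.isdir(os.path.join(dispatch, d))
        ]
        # Report the ten largest search artifacts, biggest first.
        for size_mb, sid in sorted(jobs, reverse=True)[:10]:
            print(f"{size_mb:10.1f} MB  {sid}")
```

Run as a scripted input (or from cron), the per-job output could then be indexed and charted over time to watch quota pressure.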
