We found that the search job size becomes extremely large during searches. My Splunk instance is a newly installed testing lab with only the following limits.conf.
Does anyone have any ideas about this situation?
/opt/splunk/etc/system/local/limits.conf
[search]
read_final_results_from_timeliner = 1
Hi @Alan_Chan
An extremely large search job size can be caused by several factors. To troubleshoot, start by identifying the search query that is producing the large job:
| rest /services/search/jobs
| search dispatchState="DONE" AND isFinalized=0
| sort - runDuration
| table sid, label, runDuration, scanCount, resultCount, diskUsage
| rename label as "Search Query"
This SPL will list the recent search jobs, sorted by their run duration, and provide details such as the search query, scan count, result count, and disk usage.
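Since your concern is the size of the job rather than its duration, a variant of the same search sorted by disk usage may be more directly useful. This is only a sketch built from the same endpoint and fields as above, and it assumes diskUsage is reported in bytes:
| rest /services/search/jobs
| eval diskUsageMB=round(diskUsage/1024/1024, 1)
| sort - diskUsageMB
| table sid, label, runDuration, scanCount, resultCount, diskUsageMB
The jobs at the top of this list are the ones filling up the dispatch directory, so they are the first place to look.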
The limits.conf you've provided only contains one setting: read_final_results_from_timeliner = 1. This setting is related to how Splunk reads final results, but it doesn't directly explain the large search job size.
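If you want to confirm what the [search] stanza actually resolves to once all configuration layers are merged (not just system/local), a quick check through the REST configs endpoint can help. This is only a sketch and assumes your role has permission to read configurations:
| rest /services/configs/conf-limits
| search title="search"
| table title read_final_results_from_timeliner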
To mitigate large search job sizes, consider optimising your search queries to reduce the amount of data being processed and returned (see the sketch after this list):
- Use | stats or other transforming commands early in your search to reduce data volume.
- Limit the time range of your searches.
- Avoid using * or overly broad field names in your searches.
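As an illustration, here is a minimal sketch of a search that follows these points; the index, sourcetype, and time window are only placeholders for your own data:
index=_internal sourcetype=splunkd log_level=ERROR earliest=-4h@h latest=now
| stats count BY component
| sort - count
Because stats transforms the events early and the time range is narrow, far fewer results are written into the job's dispatch artifact, which keeps the job size down.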