I'm having trouble keeping my dispatch directory at a manageable size. What I mean is that for the past week, every two days I log in to run a manual search and can't, because the dispatch directory contains some 30,000 jobs while the warning level is 2,000.
So I go in, clean out the dispatch directory, restart Splunk, and we're back in business. The problem is that sometimes I don't touch Splunk for a few weeks, and if splunkd stops, we lose the alerts we rely on for certain situations.
About the alerts: I have about 6 real-time scheduled searches that run over a rt-1m to rt-0m window, checking the last minute for a set of alert conditions. Usually they are quiet, but sometimes we get many alerts; this is intentional, and of course they are throttled to a reasonable rate.
I would like to fix the dispatch issue within Splunk itself. My fallback is to set up a script in Windows Task Scheduler that clears the dispatch directory once per night.
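If you do go the Task Scheduler route, the nightly job could be a small Python script along these lines. This is only a sketch under assumptions: the dispatch path shown in the comment is the typical default install location and may differ on your system, and the age threshold is arbitrary. Be careful not to delete jobs that are still running.

```python
import os
import shutil
import time

def clean_dispatch(dispatch_dir, max_age_hours=24):
    """Remove dispatch job subdirectories older than max_age_hours.

    dispatch_dir is assumed to be something like
    C:\\Program Files\\Splunk\\var\\run\\splunk\\dispatch
    (adjust to your installation).
    """
    cutoff = time.time() - max_age_hours * 3600
    removed = []
    for name in os.listdir(dispatch_dir):
        path = os.path.join(dispatch_dir, name)
        # Each search job gets its own subdirectory; skip plain files.
        if os.path.isdir(path) and os.path.getmtime(path) < cutoff:
            shutil.rmtree(path, ignore_errors=True)
            removed.append(name)
    return removed
```

Schedule the script to run once per night; anything newer than the threshold is left alone, so recent and still-running jobs survive.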
Do you actually need to track/monitor the results of the job?
How about the other searches?
When you create a scheduled search, a default parameter is set:
dispatch.ttl = 24h, etc. You can lower this value if you no longer need the job history; that will clear out the dispatch directory faster. The underlying data still exists in Splunk's indexes, so you can search it again later. You may also consider cleaning the jobs up manually if you can't control the job requirements.
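As a sketch, the ttl can be lowered per search in savedsearches.conf. The stanza name here is a placeholder for one of your alert searches, and the value is an example only; check the savedsearches.conf reference for the exact accepted formats (plain seconds, or a value with a `p` suffix meaning multiples of the search's scheduled period).

```
# $SPLUNK_HOME/etc/apps/<your_app>/local/savedsearches.conf
[My Alert Search]
# Keep this search's dispatch artifacts for 10 minutes (600 seconds)
# instead of the default, so they are reaped sooner.
dispatch.ttl = 600
```

Restart or reload the configuration after editing so the new ttl takes effect.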
Note that when real-time searches are scheduled, the jobs keep running indefinitely, so their artifacts expire only after a long time.