Hello everyone,
I'm having trouble keeping my dispatch directory at a manageable size. For the past week, every couple of days I log in to run a manual search and can't, because the dispatch directory holds some 30,000 jobs while the warning level is 2,000.
So I go in, clean out the dispatch directory, restart Splunk, and we're back in business. The problem is that sometimes I don't touch Splunk for a few weeks, and if splunkd stops, we lose the alerts we rely on for certain situations.
As for the alerts, I have about six real-time scheduled searches that run over rt-1m to rt-0m, checking the last minute for a set of alert conditions. They are usually quiet, but sometimes they fire many alerts; that is intentional, and of course they are throttled to keep the volume reasonable.
I would like to fix the dispatch issue within Splunk if possible. My fallback is to set up a script in Windows Task Scheduler that clears the dispatch directory once per night (a rough sketch of what I have in mind is below).
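In case it helps, this is roughly the nightly cleanup I have in mind, a minimal sketch that assumes a default Windows install path and that any job directory older than a day is safe to remove:

    # clean_dispatch.py - minimal sketch; the install path and age threshold are assumptions
    import os
    import shutil
    import time

    DISPATCH_DIR = r"C:\Program Files\Splunk\var\run\splunk\dispatch"  # adjust to your install
    MAX_AGE_SECONDS = 24 * 60 * 60  # remove job artifacts older than one day

    now = time.time()
    for name in os.listdir(DISPATCH_DIR):
        path = os.path.join(DISPATCH_DIR, name)
        # each search job keeps its artifacts in its own subdirectory under dispatch
        if os.path.isdir(path) and now - os.path.getmtime(path) > MAX_AGE_SECONDS:
            shutil.rmtree(path, ignore_errors=True)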
Any help is appreciated!
Hello,
A few things to consider.
When you create a scheduled search, a default parameter is set:
dispatch.ttl = 24h, etc. You can lower this value if you don't need the search history any more; it will clear out the dispatch directory faster. The underlying data still exists in Splunk, so you can search it out again later. You may also consider cleaning the artifacts up manually if you can't control the job requirements.
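For example, to lower the TTL on one of the saved searches, the savedsearches.conf stanza might look roughly like this (the search name and values are only illustrative; if I recall the spec correctly, dispatch.ttl takes a number of seconds, or a multiple of the run period when suffixed with p):

    # savedsearches.conf (stanza name is a placeholder)
    [My Alert - Condition X]
    dispatch.ttl = 120
    # or, as a multiple of the scheduled run period:
    # dispatch.ttl = 2p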
Note that real-time searches scheduled by the admin keep running indefinitely, and their artifacts expire only after a long time.
Control the jobs according to your requirements:
http://docs.splunk.com/Documentation/Splunk/5.0.5/Admin/savedsearchesconf
Take a look at this answer:
http://answers.splunk.com/answers/105478/too-many-search-jobs-found-in-the-dispatch-directory-found3079-warning-level2000-this-could-negatively-impact-splunks-performance-consider-removing-some-of-the-old-search-jobs
I added dispatch.ttl = 1m. In theory this should clear my dispatch directory every minute. I'll report back with the results. Thanks for the answer!
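To double-check that the value is actually being picked up, btool should show the effective setting (the search name below is just a placeholder):

    splunk btool savedsearches list "My Alert - Condition X" --debug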