I see too many search jobs present in the dispatch directory.
Even after the jobs complete, their expiry date keeps increasing and they are not removed from the dispatch folder.
This blocks me from creating other search jobs via the REST API ("Job not yet scheduled by server").
Is there any way to make Splunk clear those jobs automatically?
Every time, I have to manually clear the dispatch directory before other jobs can get scheduled.
OS : Linux 2.6.32-504.16.2.el6.x86_64 x86_64
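In case it helps frame the question, this is roughly what I'd like to do instead of rm-ing dispatch directories by hand: list jobs over REST and delete the finished ones. An untested sketch only, assuming a local instance on the default management port 8089; the host, credentials, and the DONE-only filter are my assumptions, not anything from Splunk docs:

```python
# Untested sketch: delete finished search jobs via the REST API instead of
# clearing the dispatch directory by hand. Adjust host/credentials for your setup.
import requests

BASE = "https://localhost:8089"   # assumed management URI
AUTH = ("admin", "changeme")      # assumed credentials

# List all search jobs visible to this user.
resp = requests.get(
    f"{BASE}/services/search/jobs",
    params={"output_mode": "json", "count": 0},
    auth=AUTH,
    verify=False,                 # self-signed cert on a default install
)
resp.raise_for_status()

for entry in resp.json()["entry"]:
    sid = entry["name"]
    state = entry["content"].get("dispatchState")
    # Delete finished jobs; DELETE removes the job and its dispatch artifact.
    if state == "DONE":
        requests.delete(f"{BASE}/services/search/jobs/{sid}", auth=AUTH, verify=False)
        print(f"deleted {sid}")
```

I believe you can also pass a `timeout` value (the number of seconds to keep the job after processing stops) when POSTing to /services/search/jobs, so REST-created jobs would expire sooner on their own.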
I've noticed this behavior on 6.2.6 as well. We have a six-node search head cluster, and I see jobs that get stuck for some reason: status=Done, sitting for nearly a month, and when I refresh the view the expiration moves to the current time. These are not scheduled searches that triggered an alert action either; as near as I can tell, they are just users running a search in the UI.
Also of note, we have the artifact cleanup script in place for the known issue in 6.2.6. Any non-scheduler artifacts older than 2 hours get removed by the script.
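For reference, the effect of that cleanup is roughly the following. This is a sketch only, not the actual script from the known-issue workaround; the dispatch path and the "scheduler" name prefix for scheduled-search artifacts are my assumptions about a default install:

```python
# Rough sketch of the cleanup behavior: remove non-scheduler dispatch
# artifacts older than 2 hours. Paths and naming are assumptions.
import os
import shutil
import time

DISPATCH = "/opt/splunk/var/run/splunk/dispatch"  # assumed $SPLUNK_HOME location
MAX_AGE = 2 * 60 * 60                             # 2 hours, in seconds

now = time.time()
for name in os.listdir(DISPATCH):
    path = os.path.join(DISPATCH, name)
    # Skip scheduled-search artifacts; the cleanup only touches ad hoc jobs.
    if not os.path.isdir(path) or name.startswith("scheduler"):
        continue
    if now - os.path.getmtime(path) > MAX_AGE:
        shutil.rmtree(path, ignore_errors=True)
        print(f"removed {name}")
```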