Splunk Search

Too many search jobs in the dispatch directory; job expiry time keeps extending; search artifacts not removed per TTL

splunker12er
Motivator

I see too many search jobs present in the dispatch directory.
Even after the jobs complete, their expiry date keeps increasing and they are not removed from the dispatch folder.
This impacts my ability to create other search jobs via the REST API ("Job not yet scheduled by server").
Is there any way Splunk can automatically clear those jobs?
Every time, I have to manually clear the dispatch directory for other jobs to get scheduled.

OS : Linux 2.6.32-504.16.2.el6.x86_64 x86_64
Splunk v6.0.4
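For context, the TTL of ad-hoc search artifacts is governed by settings along these lines (a hedged sketch of limits.conf values I believe apply here; verify against the docs for your Splunk version before changing anything):

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf
[search]
# Time to live, in seconds, for an ad-hoc search artifact after the search completes
ttl = 600
# TTL for artifacts of jobs a user has explicitly saved
default_save_ttl = 604800
```

Note that inspecting a job (for example, refreshing it in the Jobs view) resets its expiry clock, which can look like the expiry time "keeps on extending."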

jeremiahc4
Builder

I've noticed this behavior on 6.2.6 also. We have a six-node search head cluster, and I see jobs that get stuck for some reason: status=Done, sitting for nearly a month; refresh the view and the expiration moves to the current time. These are not scheduled searches that triggered an alert action, either; as near as I can tell, they are just users running a search in the UI.

Also of note, we have the artifact cleanup script in place for the known issue in 6.2.6. Any non-scheduler artifacts older than 2 hours get removed by the script.
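A cleanup along those lines can be sketched like this (a minimal sketch only, not the Splunk-provided script; the function name `clean_dispatch`, the 2-hour cutoff, and the `scheduler` name prefix for scheduled-search artifacts are assumptions for illustration):

```python
import os
import shutil
import time

def clean_dispatch(dispatch_dir, max_age_hours=2):
    """Remove dispatch artifact directories older than max_age_hours.

    Skips directories whose names start with 'scheduler' (assumed here to
    mark scheduled-search artifacts), mirroring the "non-scheduler
    artifacts" behavior described above. Returns removed directory names.
    """
    cutoff = time.time() - max_age_hours * 3600
    removed = []
    for name in os.listdir(dispatch_dir):
        path = os.path.join(dispatch_dir, name)
        if not os.path.isdir(path):
            continue
        if name.startswith("scheduler"):  # leave scheduled-search artifacts alone
            continue
        if os.path.getmtime(path) < cutoff:
            shutil.rmtree(path)
            removed.append(name)
    return removed

# Typical invocation (path is the default dispatch location):
# clean_dispatch("/opt/splunk/var/run/splunk/dispatch")
```

Run something like this from cron if you need a stopgap, but the root cause of jobs whose expiry keeps moving forward is still worth chasing.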


Lucas_K
Motivator

Known issue with that particular version?


splunker12er
Motivator

It's not a known issue; I verified against the docs:
http://docs.splunk.com/Documentation/Splunk/6.0.3/ReleaseNotes/KnownIssues
