Splunk Search

Too many search jobs in the dispatch directory + job expiry time keeps extending + search artifacts not removed per ttl

splunker12er
Motivator

I see too many search jobs present in the dispatch directory.
Even after the jobs complete, their expiry date keeps increasing and they are not removed from the dispatch folder.
This prevents me from creating other search jobs via the REST API ("Job not yet scheduled by server").
Is there any way Splunk can automatically clear those jobs?
Every time, I have to manually clear the dispatch directory for other jobs to get scheduled.

OS : Linux 2.6.32-504.16.2.el6.x86_64 x86_64
Splunk v6.0.4

jeremiahc4
Builder

I've noticed this behavior on 6.2.6 also. We have a 6-node search head cluster, and I see jobs that get stuck for some reason: status=Done, sitting for nearly a month, and when I refresh the view the expiration moves to the current time. These are not scheduled searches that triggered an alert action, either. These are just users running a search in the UI, as near as I can tell.

Also of note, we have the artifact cleanup script in place for the known issue in 6.2.6. Any non-scheduler artifacts older than 2 hours get removed by the script.
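For illustration, a cleanup along those lines can be sketched as a cron-able shell snippet. This is a sketch of the approach, not the actual workaround script; the dispatch path under `$SPLUNK_HOME`, the `scheduler__*` naming convention for scheduled-search artifacts, and the 2-hour threshold are assumptions used to illustrate the idea:

```shell
# Sketch: remove non-scheduler search artifacts older than 2 hours from
# the dispatch directory. Directories named scheduler__* (assumed to be
# scheduled-search artifacts) are left alone so their TTL handling is
# untouched. Adjust SPLUNK_HOME and the -mmin threshold to your setup.
DISPATCH_DIR="${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch"

find "$DISPATCH_DIR" -mindepth 1 -maxdepth 1 -type d \
    ! -name 'scheduler__*' -mmin +120 -exec rm -rf {} +
```

Running something like this from cron keeps the dispatch directory from filling up, while leaving scheduler artifacts to expire on their own TTL.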


Lucas_K
Motivator

Known issue with that particular version?


splunker12er
Motivator

It's not a known issue; I verified against the docs:
http://docs.splunk.com/Documentation/Splunk/6.0.3/ReleaseNotes/KnownIssues
