
Too many search jobs found in the dispatch directory

Contributor

I'm seeing this message after firing up a backfill of data in the Splunk Deployment Monitor:

Too many search jobs found in the dispatch directory (found=3692, warning level=2000). This could negatively impact Splunk's performance, consider removing some of the old search jobs.

Is this dangerous? Can I manually clear out var/run/splunk/dispatch when it's done?


Re: Too many search jobs found in the dispatch directory

Splunk Employee

If you see this error, you can manually clear out jobs in the dispatch folder; I would recommend starting with the older ones. The only downside is that the artifacts of the saved searches that populated the summary index won't be around once you clear the dispatch directory. Since the data you are after is in the summary index, that doesn't matter: any artifacts you eliminate will be regenerated at the next scheduled run time for the corresponding saved search.


Re: Too many search jobs found in the dispatch directory

Explorer

I ran the clean-dispatch command and it moved the jobs to the old-dispatch directory. Do I really need to keep them, or can I just delete them?


Re: Too many search jobs found in the dispatch directory

Path Finder

Great question! We have had Splunk for 2 years and I have never had anyone ask me for search results from the old-dispatch-jobs directory. So in my mind that means you can delete them, say, after a week. :-)
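If you go that route, here is a minimal sketch (assuming the jobs were moved to /opt/splunk/old-dispatch-jobs/ as in the clean-dispatch examples later in this thread) that removes anything older than a week:

# Remove moved job directories whose last modification is more than 7 days old
find /opt/splunk/old-dispatch-jobs/ -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +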


Re: Too many search jobs found in the dispatch directory

New Member

Are there any recent updates on how to clean up those extra jobs that keep running and generating errors?


Re: Too many search jobs found in the dispatch directory

Champion

Conversely, is there some way to raise the threshold if you are actually running that many searches?


Re: Too many search jobs found in the dispatch directory

Explorer

To be honest, I would expect Splunk to clean these out; it seems like it should be an automatic housekeeping task. We are starting to see this quite regularly in our deployment.


Re: Too many search jobs found in the dispatch directory

Splunk Employee

To add to jbsplunk's answer:

The number of directories holding search artifacts in the dispatch directory can potentially affect search performance, since Splunk has to scan each of those directories to determine whether the artifacts are present.

The UI warning message about the dispatch directory containing more than 2K jobs is new in 4.2.3. There isn't a hard 2K limit that impacts anything; the warning was implemented as a best-practice prompt to review your search jobs in general, e.g., is there a scheduled search with an excessively long TTL?

You can raise that 2K threshold, so the warning takes longer to appear, via the [search] stanza in your $SPLUNK_HOME/etc/system/local/limits.conf.

From the $SPLUNK_HOME/etc/system/README/limits.conf.spec:

[search]
dispatch_dir_warning_size = <int>

* The number of jobs in the dispatch directory when to issue a bulletin message warning that performance could be impacted
* Defaults to 2000
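
For example, to raise the threshold to 5,000 jobs (a value picked purely for illustration), you could add the following to $SPLUNK_HOME/etc/system/local/limits.conf:

[search]
# Raise the dispatch-directory warning threshold from the default of 2000
dispatch_dir_warning_size = 5000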

The number of dispatch directories at which performance starts to be impacted varies per environment, as it depends on variables such as the volume and type of searches being run, their TTLs, etc.

If you find subdirectories in $SPLUNK_HOME/var/run/splunk/dispatch that are more than 24 hours past their last modtime AND do not contain both the info.csv and status.csv files, those are considered failed search jobs and can be safely removed. We expect this to be performed automatically by the dispatch reaper starting in 4.3.
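
A minimal sketch of how you might list such candidates before removing anything (this simply applies the 24-hour/missing-files criterion above; review the output before deleting):

# List dispatch subdirectories not modified in the last 24 hours
# that are missing both info.csv and status.csv
find $SPLUNK_HOME/var/run/splunk/dispatch -mindepth 1 -maxdepth 1 -type d -mmin +1440 | while read -r dir; do
    [ ! -f "$dir/info.csv" ] && [ ! -f "$dir/status.csv" ] && echo "$dir"
done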

In the meantime, outside of your own cron/scripting, there is an option to move subdirectories out of the dispatch directory based on a timeline you have determined is acceptable.

Below is the usage information.
Use this command to move jobs whose last modification time is earlier than the specified time from the dispatch directory to the specified destination directory.

usage: $SPLUNK_HOME/bin/splunk cmd splunkd clean-dispatch {destination directory where to move jobs} {latest job mod time}

The destination directory must be on the same partition/filesystem as the dispatch directory.

example: splunk cmd splunkd clean-dispatch /opt/splunk/old-dispatch-jobs/ -1month
example: splunk cmd splunkd clean-dispatch /opt/splunk/old-dispatch-jobs/ -10d@d
example: splunk cmd splunkd clean-dispatch /opt/splunk/old-dispatch-jobs/ 2011-06-01T12:34:56.000-07:00
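
As a sketch of the cron/scripting approach mentioned above (the schedule and paths are assumptions), you could run clean-dispatch nightly:

# crontab entry: every night at 03:00, move jobs older than 10 days out of dispatch
0 3 * * * /opt/splunk/bin/splunk cmd splunkd clean-dispatch /opt/splunk/old-dispatch-jobs/ -10d@d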

Future enhancements to manage search job cleanup are planned.


Re: Too many search jobs found in the dispatch directory

Builder

I had the same problem. I ran the clean-dispatch command and, for a few jobs, got the following error:

Could not move /opt/splunk/var/run/splunk/dispatch/schedulercmerchantsearchSW50ZXJhY3QgU3VzcGljaW91cyBTb3VyY2UgSVAgQWxlcnQat13391307002aeaae3b6e48c487 to /space/splunktmp/schedulercmerchantsearchSW50ZXJhY3QgU3VzcGljaW91cyBTb3VyY2UgSVAgQWxlcnQat13391307002aeaae3b6e48c487. Invalid cross-device link


Re: Too many search jobs found in the dispatch directory

Splunk Employee

I updated the answer; due to the current limitations of clean-dispatch, the destination directory must be on the same filesystem as the dispatch directory.
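
For instance, assuming Splunk is installed under /opt/splunk (so the destination sits on the same partition as the dispatch directory), a working invocation would look like:

mkdir -p /opt/splunk/old-dispatch-jobs
/opt/splunk/bin/splunk cmd splunkd clean-dispatch /opt/splunk/old-dispatch-jobs/ -1month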
