Monitoring Splunk

Excessive disk space in var/run/splunk/dispatch

I've noticed that my Splunk search head is using more disk space than expected. Traversing the /opt/splunk directory structure, I found that the majority of the data is in a number of rt_scheduler* directories within var/run/splunk/dispatch.

root@core-index-1:/opt/splunk/var/run/splunk/dispatch# du -sh rt_sche*
9.2G   rt_scheduler__nobody_c3BpbmVfb3Bz__RMD53c62647d6192c773_at_1400376360_31462
20M    rt_scheduler__nobody_c3BpbmVfb3Bz__RMD58de258515d430540_at_1400502810_38825
9.3G   rt_scheduler__nobody__operations__RMD53c62647d6192c773_at_1400377140_31693
20M    rt_scheduler__nobody__operations__RMD58de258515d430540_at_1400502810_38824

Within each of these directories there are hundreds of gzipped CSV files, e.g.:

root@core-index-1:/opt/splunk/var/run/splunk/dispatch/rt_scheduler__nobody_c3BpbmVfb3Bz__RMD53c62647d6192c773_at_1400376360_31462# du -sh *
1.5M    rtwindow_1400578535.212.csv.gz
1.5M    rtwindow_1400579355.213.csv.gz
1.5M    rtwindow_1400580828.214.csv.gz
1.5M    rtwindow_1400582774.215.csv.gz
1.5M    rtwindow_1400584602.216.csv.gz
1.5M    rtwindow_1400586048.217.csv.gz
1.5M    rtwindow_1400587830.218.csv.gz
1.5M    rtwindow_1400589574.219.csv.gz
5.6M    search.log
9.6M    search.log.1
9.6M    search.log.2
9.6M    search.log.3
1.5M    srtmpfile_1000166697.csv.gz
1.5M    srtmpfile_1000259418.csv.gz
1.5M    srtmpfile_1000627332.csv.gz
1.5M    srtmpfile_1000715784.csv.gz
1.5M    srtmpfile_1000774912.csv.gz
...

I've looked through the known issues for Splunk 5.0.4 and the limits.conf documentation with respect to real-time search and scheduling, but cannot find anything that looks like a candidate. The only related information I found concerns Splunk 4: http://answers.splunk.com/answers/29551/too-many-search-jobs-found-in-the-dispatch-directory

Would it be possible to get some advice as to what these files are for and how I should go about preventing them from building up?


Re: Excessive disk space in var/run/splunk/dispatch


If you have real-time alerts running, these are probably the files associated with those alerts.

You might be able to reduce the disk usage by lowering the default saved TTL (time to live) in limits.conf, though I am not sure that will address this particular problem.
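For example, the TTL-related settings live in the [search] stanza of limits.conf; the values below are illustrative, not recommendations, and for a single scheduled search you can instead set dispatch.ttl on that search in savedsearches.conf:

```
# limits.conf -- illustrative values only
[search]
# How long (seconds) a search job's dispatch artifacts persist after it finishes
ttl = 600
# TTL (seconds) applied to jobs a user explicitly saves
default_save_ttl = 604800
```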

But perhaps a better way is to limit the role that is running these searches. If you look at Settings -> Searches on your search head, you can probably identify the owning user and, by extension, the role. You can then reduce the search disk quota for that role.
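The per-role quota is set in authorize.conf; the role name and value below are hypothetical placeholders:

```
# authorize.conf -- hypothetical role name and illustrative quota
[role_ops_user]
# Maximum disk space (MB) that search jobs owned by this role may use
srchDiskQuota = 500
```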

On the other hand, you may find that this is actually important data that is necessary for your alerts to work properly.
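Before changing any settings, it can help to see which real-time scheduler artifacts are actually consuming the space. A minimal sketch, assuming the default Splunk install path and an arbitrary two-day cutoff (both are assumptions to adjust for your deployment), which only lists candidates rather than deleting anything:

```shell
# Sketch: list rt_scheduler dispatch directories older than 2 days with their sizes.
# DISPATCH path and the -mtime cutoff are assumptions; review before removing anything.
DISPATCH="${DISPATCH:-/opt/splunk/var/run/splunk/dispatch}"
find "$DISPATCH" -maxdepth 1 -type d -name 'rt_scheduler*' -mtime +2 \
    -exec du -sh {} \; 2>/dev/null
```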
