Deployment Architecture

Is it best practice to clear the entire dispatch directory when one of our search peers has reached the maximum space?

Motivator

Hi all, is it possible to clear the entire dispatch directory? One of our search peer Splunk instances has reached its maximum size, and we see the message below popping up in the Splunk Web portal.

Search peer splunk01 has the following message: The minimum free disk space (5000MB) reached for /opt/splunk/var/run/splunk/dispatch

Questions:

1) Can we clear the entire dispatch directory using the rm -rf command, and do we need to stop the splunk service while doing so?

2) Is there a recommended command for clearing the dispatch directory, and would we need to stop the Splunk services first?

3) Will there be any impact if we delete all the files under this directory? We have also noticed that the other indexer peers have little space left; is it safe to clear the dispatch directory on all search peers at the same time?

We are running Splunk 6.2.1 in a distributed environment with 5 indexer instances, 4 search heads, one scheduled-search job instance, and a combined license master/deployment server.
Kindly guide me on this, as we need to free this space before Splunk searching fails.

Thanks in advance.


Re: Is it best practice to clear the entire dispatch directory when one of our search peers has reached the maximum space?

Legend

First, I would delete files from these directories if they have not been modified for some time period. As a starting point, I would probably delete anything that has not been modified in the last 7 days. I think this is fine for all of the indexers and the search heads, and I don't think you will need to stop or restart.

If you rm -rf everything from the dispatch directory, you will delete any saved search results and may affect the accuracy of alerts. So I would try not to delete anything that has been modified in the last 24 hours.
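The 7-day rule above can be sketched as a find one-liner. The path and cutoff here are assumptions; adjust DISPATCH for your own $SPLUNK_HOME layout, and always dry-run before deleting:

```shell
# Sketch only: prune dispatch artifacts untouched for 7+ days.
# DISPATCH is an assumption -- adjust for your $SPLUNK_HOME layout.
DISPATCH="${DISPATCH:-/opt/splunk/var/run/splunk/dispatch}"

if [ -d "$DISPATCH" ]; then
    # Dry run first: list what would be deleted.
    find "$DISPATCH" -maxdepth 1 -mindepth 1 -type d -mtime +7
    # After reviewing the list, uncomment to actually delete:
    # find "$DISPATCH" -maxdepth 1 -mindepth 1 -type d -mtime +7 -exec rm -rf {} +
fi
```

Each dispatch artifact is a directory, so the find targets directories one level below dispatch and prunes whole artifacts rather than individual files inside them.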


Re: Is it best practice to clear the entire dispatch directory when one of our search peers has reached the maximum space?

Motivator

Hi lguinn, thanks for the information. However, when I executed the command below, it did not move the files to the /tmp folder; instead it threw the errors shown, and I am not sure why, since I followed these steps.

bash-4.1$ ./splunk cmd splunkd clean-dispatch /tmp -24@h

Could not move /opt/splunk/var/run/splunk/dispatch/scheduler__admin_U0EtQWNjZXNzUHJvdGVjdGlvbg__RMD5203a2beee6037c4b_at_1484110500_46277 to /tmp/scheduler__admin_U0EtQWNjZXNzUHJvdGVjdGlvbg__RMD5203a2beee6037c4b_at_1484110500_46277. Invalid cross-device link
Could not move /opt/splunk/var/run/splunk/dispatch/scheduler__nobody_VEEtZmlyZV9icmlnYWRl__RMD55aaa2df2491d4b84_at_1484110740_48552 to /tmp/scheduler__nobody_VEEtZmlyZV9icmlnYWRl__RMD55aaa2df2491d4b84_at_1484110740_48552. Invalid cross-device link
Could not move /opt/splunk/var/run/splunk/dispatch/scheduler__admin_U0EtQWNjZXNzUHJvdGVjdGlvbg__RMD51b34e21c24086dff_at_1484110500_46280 to /tmp/scheduler__admin_U0EtQWNjZXNzUHJvdGVjdGlvbg__RMD51b34e21c24086dff_at_1484110500_46280. Invalid cross-device link
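The "Invalid cross-device link" errors are the standard rename(2) EXDEV failure: a file can only be renamed within a single filesystem, and clean-dispatch moves artifacts rather than copying them. If /tmp sits on a different filesystem than /opt/splunk (it is often a separate tmpfs mount), every move fails this way. A possible workaround, assuming /opt/splunk/var is all one filesystem (the scratch path below is only an example), is to point clean-dispatch at a directory on the same mount:

```shell
# Assumption: this scratch directory is on the same filesystem as dispatch.
mkdir -p /opt/splunk/var/run/splunk/dispatch-trash
cd /opt/splunk/bin
./splunk cmd splunkd clean-dispatch /opt/splunk/var/run/splunk/dispatch-trash -24@h
# Once you are satisfied nothing needed is in there, reclaim the space:
# rm -rf /opt/splunk/var/run/splunk/dispatch-trash
```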

Also, most of the files under the dispatch directory are recent; only two directories are older than a day or two. Is it still fine to clean the dispatch directory?

/opt/splunk/var/run/splunk/dispatch

drwx------ 2 splunk splunk 4096 Jan 11 00:35 scheduler__admin_U0EtQXVkaXRBbmREYXRhUHJvdGVjdGlvbg__RMD50309564ea6190f59_at_1484111700_52239
drwx------ 2 splunk splunk 4096 Jan 11 00:35 scheduler__admin_U0EtQXVkaXRBbmREYXRhUHJvdGVjdGlvbg__RMD50309564ea6190f59_at_1484112600_52280
drwx------ 2 splunk splunk 4096 Jan 11 00:45 scheduler__admin_U0EtQXVkaXRBbmREYXRhUHJvdGVjdGlvbg__RMD50309564ea6190f59_at_1484113500_57195
drwx------ 4 splunk splunk 4096 Jan 11 00:40 scheduler__admin_U0EtQXVkaXRBbmREYXRhUHJvdGVjdGlvbg__RMD5299817463118a981_at_1484113200_55048
drwx------ 4 splunk splunk 4096 Jan 11 00:45 scheduler__admin_U0EtQXVkaXRBbmREYXRhUHJvdGVjdGlvbg__RMD5299817463118a981_at_1484113500_57194
drwx------ 3 splunk splunk 4096 Jan  9 23:59 scheduler__nobody_VEEtZmlyZV9icmlnYWRl__RMD55aaa2df2491d4b84_at_1484024340_67342
drwx------ 3 splunk splunk 4096 Jan 10 23:59 scheduler__nobody_VEEtZmlyZV9icmlnYWRl__RMD55aaa2df2491d4b84_at_1484110740_48552
drwx------ 4 splunk splunk 4096 Jan 11 00:45 subsearch_scheduler__nobody_U3BsdW5rX2Zvcl9FeGNoYW5nZQ__RMD53fdff49fe21aa77e_at_1484113500_57179_1484113503.1
drwx------ 2 splunk splunk 4096 Jan 11 00:40 scheduler__nobody__r__RMD53dc0a2d72b392d26_at_1484113200_55030
drwx------ 2 splunk splunk 4096 Jan 11 00:45 scheduler__nobody__r__RMD53dc0a2d72b392d26_at_1484113500_57177

Please let me know whether we need to stop Splunk and move the files from this folder, and also how to set up monitoring of the dispatch directory's size in Splunk using a Splunk query.
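For the monitoring question, one minimal approach outside Splunk is a cron'd shell check on the directory itself. This is a sketch under assumptions: the path and threshold are placeholders, with the limit mirroring the 5000 MB free-space message above:

```shell
# Sketch: report dispatch usage and warn above an assumed threshold.
DISPATCH="${DISPATCH:-/opt/splunk/var/run/splunk/dispatch}"
LIMIT_MB=5000

if [ -d "$DISPATCH" ]; then
    # du -sm gives the total size of the tree in megabytes.
    used_mb=$(du -sm "$DISPATCH" | awk '{print $1}')
    echo "dispatch is using ${used_mb} MB"
    if [ "$used_mb" -ge "$LIMIT_MB" ]; then
        echo "WARNING: dispatch usage is at or above ${LIMIT_MB} MB" >&2
    fi
fi
```

If you would rather watch this from within Splunk, the free-disk warnings that splunkd raises are written to its internal logs, so a scheduled search over index=_internal is another option; check your own internal data for the exact source and fields, as they vary by version.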

thanks in advance.


Re: Is it best practice to clear the entire dispatch directory when one of our search peers has reached the maximum space?

Legend

I didn't know there was such a command as

./splunk cmd splunkd clean-dispatch /tmp -24@h

So thanks for that info! But I am not sure that the command does what you think, either...

Also, it looks like many of these artifacts were generated by scheduled searches, not ad hoc searches. I would be more reluctant to delete scheduled-search results from the dispatch directory, since some alerts depend on having data from both the current and previous runs available.

Instead, I would carefully review the scheduled searches you have running. If there are many of them, you may simply need to increase the disk space available to the dispatch directory.

If you need more specific assistance, you may need to open a support ticket, but there may be others in the community that can offer advice as well.
