Getting Data In

Why is our dispatch directory filling up with strange CSV files?

thezero
Path Finder

Hi,

Our dispatch directory is filling up with newly created files like srfiletmp_420128713.csv, and they are gradually eating up the space on the server. Could you please advise how I can trace back which search is generating them? Are real-time searches responsible?
What happens if we delete the contents of the dispatch directory? Can we trace back the saved search name from the contents of the dispatch directory? Please advise.

1 Solution

lguinn2
Legend

A general comment: the $SPLUNK_HOME/var/run/splunk directory stores a lot of things:

  • information about apps/bundles that have been installed on this machine
  • information about when scheduled jobs should run, and when they ran in the recent past
  • information about currently logged-in sessions

The size of this directory will grow as the number of searches grows in your environment. Therefore, I would move the contents of $SPLUNK_HOME/var/run/splunk (and $SPLUNK_HOME/var/log/splunk) to a different volume. Use symbolic links to maintain the original directory entries.
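On Linux, the move could look something like this - just a sketch, assuming $SPLUNK_HOME is set in your shell and that /bigdisk is a placeholder for your larger volume:

$SPLUNK_HOME/bin/splunk stop
# move the directories onto the larger volume
mv $SPLUNK_HOME/var/run/splunk /bigdisk/splunk-var-run
mv $SPLUNK_HOME/var/log/splunk /bigdisk/splunk-var-log
# recreate the original paths as symbolic links
ln -s /bigdisk/splunk-var-run $SPLUNK_HOME/var/run/splunk
ln -s /bigdisk/splunk-var-log $SPLUNK_HOME/var/log/splunk
$SPLUNK_HOME/bin/splunk start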

In particular, $SPLUNK_HOME/var/run/splunk/dispatch contains a directory for each search that is running or has completed. For example, a directory named 1434308943.358 will contain a CSV file of its search results, a search.log with details about the search execution, and other stuff. Using the defaults (which you can override in limits.conf), these directories will be deleted 10 minutes after the search completes - unless the user saves the search results, in which case the results will be deleted after 7 days.
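If you do want different lifetimes, the settings live in the [search] stanza of limits.conf (for example $SPLUNK_HOME/etc/system/local/limits.conf). A minimal sketch - the setting names ttl and default_save_ttl are what I'd expect, but check the limits.conf spec for your version:

[search]
# seconds to keep a dispatch directory after its search completes (default 600 = 10 minutes)
ttl = 600
# seconds to keep results that the user has explicitly saved (default 604800 = 7 days)
default_save_ttl = 604800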

Scheduled searches use a slightly different name for their results. For example, scheduler__admin__search__RMD593e0ac5feff458ae_at_1434310020_9 is a results directory for a scheduled search requested by the admin. The at_1434310020 part is when the search ran (a Unix timestamp). Unless you change the defaults, each scheduled search keeps only the results from its last 2 runs.
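You can also control the artifact lifetime per scheduled search with dispatch.ttl in savedsearches.conf. Again just a sketch - the stanza name below is a placeholder, and the 'p' suffix (multiply by the scheduled period) should be confirmed against the savedsearches.conf spec for your version:

[my scheduled search]
# keep artifacts for 2 scheduled periods (the usual default); a plain number of seconds also works
dispatch.ttl = 2p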

You can see which searches are running in Linux by using the ps command. In the command output, you will see something like

[splunkd pid=31930] search --id=1434310476.366 --maxbuckets=300 --ttl=600 --maxout=500000 --maxtime=8640000 --lookups=1 --reduce_freq=10 --rf=* --user=admin --pro --roles=admin:power:user  

The id is called the "search id" or "sid". This is the name of the dispatch directory (or part of the name, for scheduled searches). If you delete the dispatch directory for a running search, the search will hang.
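To see which jobs are running and which dispatch directories are taking the most space, something like this should work on Linux (assuming GNU coreutils and $SPLUNK_HOME set in your shell):

# list running search processes and their sids
ps -ef | grep '[s]earch --id='
# show the largest dispatch directories first
du -sh $SPLUNK_HOME/var/run/splunk/dispatch/* | sort -rh | head -20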

When you go to the Splunk UI, you can see the search jobs under "Activity -> Jobs". If you click "Inspect", it will show you the search id for the job. Also, the size field in the Jobs list will tell you how much space is being used by the corresponding directory in dispatch. You can manually delete these jobs from this view, if you have sufficient privileges. You can also go to $SPLUNK_HOME/var/run/splunk/dispatch, and delete the directories for completed searches without consequences.
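If you prefer the command line, a cautious approach is to list dispatch directories that have not been modified for a while before deleting anything - the one-day cutoff below is arbitrary, and you should make sure no long-running or saved job still needs a directory. Splunk also ships a clean-dispatch CLI command that moves old job directories to another location rather than deleting them; check the documentation for your version.

# list candidate directories first
find $SPLUNK_HOME/var/run/splunk/dispatch -mindepth 1 -maxdepth 1 -type d -mmin +1440 -print
# only after reviewing the list, append a delete action such as: -exec rm -rf {} +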

I don't specifically know what the srfiletmp_xxxxxxxx.csv files are, but I would guess that the numbers refer to a search id. Maybe all this info will help you track it down.
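If the number in the filename really is (part of) a search id, two quick checks might tie it back to a job. These are just guesses, using the 420128713 value from your example:

# does any dispatch directory name contain the number?
ls -d $SPLUNK_HOME/var/run/splunk/dispatch/*420128713* 2>/dev/null
# does any search.log mention it?
grep -l 420128713 $SPLUNK_HOME/var/run/splunk/dispatch/*/search.log 2>/dev/null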

I am sure that Splunk Support will know what these files are for. If you find out, please post an answer or a comment here for the rest of us. Thanks!


abhullar_splunk
Splunk Employee

Here is a blog that explains what all the files are for inside the dispatch directory:

http://blogs.splunk.com/2012/09/10/a-quick-tour-of-a-dispatch-directory/

nmaiorana
Explorer

Explanation but no answer. What is the solution to this problem? Can we tell Splunk not to store the job?


lguinn2
Legend

You can't tell Splunk not to store the job; the search would break without its dispatch directory. The solution is to make sure that this directory/volume has sufficient space to store the temporary files for the searches that you run.
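A quick way to keep an eye on the space is plain Linux tooling, nothing Splunk-specific:

df -h $SPLUNK_HOME/var/run/splunk
du -sh $SPLUNK_HOME/var/run/splunk/dispatch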


thezero
Path Finder

Hi lguinn,
Thanks for your advice. I finally found the root cause. The issue was due to long-running searches over multiple servers; the temporary CSV files were storing the results of those searches.
