I have noticed that the real-time (rt) search dispatch artifacts in ~/var/run/splunk/dispatch are not being cleaned up for this app.
From my understanding, a normal real-time search should have the dispatch.ttl setting applied to it, but does that also apply to the scripted inputs this real-time app uses to create its rt jobs?
We use a dedicated search head for this app, and I've tried setting system-wide TTLs in savedsearches.conf, followed by a Splunk restart, but they don't take effect. Note: this search head runs nothing other than this app.
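For reference, the system-wide setting I attempted looks roughly like the fragment below; the stanza placement and value are illustrative only, not necessarily what this app's scripted-input jobs honor:

```ini
# savedsearches.conf (system-wide default stanza) -- illustrative values
[default]
# keep search artifacts for 60 seconds after the job completes
dispatch.ttl = 60
```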
Yet the search artifacts still persist, and I need to manually clean the dispatch directory every few hours.
The app isn't supposed to create any dispatch artifacts at all. The expected behavior of a GET to /services/search/jobs/export?search=&lt;your search&gt; is that no dispatch artifact is created; instead, the events/results are streamed back to the consumer.
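As a sanity check, you can exercise the export endpoint yourself with plain curl. The host, credentials, and query below are placeholders, and this sketch only prints the command rather than hitting a live splunkd:

```shell
# Placeholder endpoint and credentials -- substitute your own.
SPLUNKD="https://localhost:8089"
QUERY='search index=_internal | head 5'

# The export endpoint streams events back on the same HTTP response,
# so no dispatch artifact should be left on disk for the job.
# Dry run: print the command instead of executing it.
echo curl -sk -u admin:changeme \
     "$SPLUNKD/services/search/jobs/export" \
     --data-urlencode "search=$QUERY"
```

Run the printed command against your own search head to confirm results stream back without a new directory appearing under dispatch.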
Can you confirm that you are using the latest version of the app? If you look inside the artifacts, can you tell me what the TTL is set to?
Just had a look; my best guess is that the TTL info is in the metadata.csv file. It seems to show that the job is getting a 1-minute setting from somewhere (not necessarily my setting):
"read : [ admin ], write : [admin]",admin,SplunkRealTimeOutput,60
I believe I have the latest version, 1.0.4 beta, build 200335.
Thanks for confirming. Based on your reply, something else is fouling things up, because 1) I'm pretty sure there shouldn't be any artifacts in the first place, and 2) a TTL of one minute should be honored by the dispatch reaper.
What version of Splunk are you running? Are there any other settings I should know about, such as indexed real-time search being enabled?
I was using a cron job to keep the dispatch directory clean.
My guess is that the app (or Splunk) isn't working properly with search head pooling's mounted app/dispatch locations. A standalone instance works fine; as soon as you switch to a mounted file system for pooling, the dispatch directory starts filling up.
Very interesting. I can't say I tested in a SH pooling environment. I'll make a note of this for improvements in the future.
Just wanted to confirm that we have seen the exact same behavior, usually after each indexer restart. We have the RTO running on a dozen indexers, with about 6 RTO stanzas. When the indexer is restarted (usually to reload the RTO...) we will see a huge spike in dispatch jobs. If left alone, we have seen it go up to over 3000 dispatch jobs on each indexer. We implemented a cron job to delete any "rt" files in the dispatch directory after 4 hours. Under normal operation, there is only 1 "rt" job for each RTO stanza.
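Our cleanup job is roughly the sketch below. The dispatch path and the "rt" name pattern are assumptions about your layout, and the demo runs against a throwaway temp directory so it is safe to try anywhere:

```shell
# Assumed dispatch location; adjust SPLUNK_HOME for your install.
DISPATCH_DIR="${SPLUNK_HOME:-/opt/splunk}/var/run/splunk/dispatch"

# Demo against a temp directory instead of the real dispatch dir.
DISPATCH_DIR=$(mktemp -d)
mkdir "$DISPATCH_DIR/rt_old_job" "$DISPATCH_DIR/rt_new_job" "$DISPATCH_DIR/scheduler_job"
touch -d '5 hours ago' "$DISPATCH_DIR/rt_old_job"   # simulate a stale rt artifact

# Remove rt dispatch directories not modified in the last 4 hours (240 min);
# non-rt jobs are left for the normal dispatch reaper.
find "$DISPATCH_DIR" -mindepth 1 -maxdepth 1 -type d \
     -name 'rt*' -mmin +240 -exec rm -rf {} +

ls "$DISPATCH_DIR"   # rt_old_job is gone; rt_new_job and scheduler_job remain
```

In cron, the find line alone (pointed at the real dispatch directory) is all that's needed, e.g. run hourly.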