Getting Data In

Controlling dispatch directory growth

mark
Path Finder

Hi,

We have a continual issue in our environment with the $SPLUNK_HOME/var/run/splunk/dispatch directory growing out of control: it is constantly above 2,000 directories, which degrades system performance.

There are two use cases that seem to cause the biggest issues:

1. Real-time searches that alert frequently. In this case a new result (and directory) is created every 1-2 minutes, which can produce hundreds of directories within a few hours. Most of these real-time alerts are already restricted to a 24-hour retention, but that doesn't help if alerts trigger all night; there are easily 500+ directories by the morning for just one search...

2. Scheduled searches that are set up to run frequently with several days of retention. We recently had a user set up a search at 5-minute intervals with a 30-day retention... This created a slow growth of 1,152 directories over 4 days.

Between these two use cases, Splunk quite frequently exceeds 3,000 directories.
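For reference, the growth is easy to track with a quick directory count (assuming the default dispatch path under $SPLUNK_HOME/var/run/splunk):

  ls -1 $SPLUNK_HOME/var/run/splunk/dispatch | wc -l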

I’m curious how other people are managing this.

In some circumstances it makes sense to retain results for 30 days, for example for a daily search.
It also makes sense for critical monitoring to alert frequently. However, the combination of the two creates too many directories in dispatch for Splunk to operate efficiently.

Is there a mechanism to enforce a job-retention limit for a particular user role, e.g. 24 hours only?

Is there any mechanism to alter how the dispatch directory operates? Even subfolders per app or per user would really help in this case...

Mark


gkanapathy
Splunk Employee

You should simply change the retention periods of your saved searches. They are controlled by the ttl or timeout parameters, though depending on how the search is scheduled and dispatched, there are several places where the value may be set or overridden. See savedsearches.conf and alert_actions.conf.
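For example, a per-search override might look like the following. This is only a sketch: the stanza name is hypothetical, and both dispatch.ttl (savedsearches.conf) and the alert action ttl (alert_actions.conf) accept either seconds or a multiple of the scheduled period with a "p" suffix.

  # $SPLUNK_HOME/etc/apps/<app>/local/savedsearches.conf
  # (the search name below is just an example)
  [My Frequent Alert]
  # keep this search's job artifacts for 24 hours,
  # or use a multiple of the schedule, e.g. dispatch.ttl = 2p
  dispatch.ttl = 86400

  # $SPLUNK_HOME/etc/apps/<app>/local/alert_actions.conf
  # when an alert action fires, that action's ttl takes over
  [email]
  ttl = 86400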

As for users, you can use roles to limit the amount of disk space a user's search jobs can consume, which indirectly limits the number of jobs they keep around.
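Those quotas live in authorize.conf. A minimal sketch, with a made-up role name and numbers you would tune to your environment:

  # $SPLUNK_HOME/etc/system/local/authorize.conf
  [role_frequent_alerting]
  # maximum disk space, in MB, that this role's search jobs may use
  srchDiskQuota = 500
  # maximum concurrent historical and real-time search jobs
  srchJobsQuota = 5
  rtSrchJobsQuota = 3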

mendesjo
Path Finder

Thanks for the answer, but as someone new to Splunk... my goodness, there are a million savedsearches.conf files. Which one?
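One way to see which copy actually applies is btool, which prints the merged configuration along with the file each setting comes from (the grep filter is just an example):

  $SPLUNK_HOME/bin/splunk btool savedsearches list --debug | grep -i ttl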


kamal_jagga
Contributor

Go into the app that has the most searches, or the least useful ones. In its local directory, create a limits.conf and update the ttl value.

ttl =
* The time to live (ttl), in seconds, of the cache for the results of a given
  subsearch.
* Do not set this below 120 seconds.
* See the definition in the [search] stanza under the "TTL" section for more
  details on how the ttl is computed.
* Default: 300 (5 minutes)

https://docs.splunk.com/Documentation/Splunk/7.2.3/Admin/Limitsconf
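A concrete sketch of what that could look like (the app name is hypothetical). Note that the spec quoted above describes the subsearch cache ttl; the ttl under the [search] stanza is the one that governs how long completed search job artifacts stay in dispatch:

  # $SPLUNK_HOME/etc/apps/<your_app>/local/limits.conf
  [search]
  # how long, in seconds, completed search job artifacts are kept on disk
  ttl = 600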
