
Can you change the disk space limit for $SPLUNK_HOME/var/run?

Super Champion

Does anyone know what the disk space requirements are in Splunk 4.1? I have a separate partition for $SPLUNK_HOME/var/run, which is just under 2 GB total. I've never had a problem with this in 3.4.x or 4.0.x. Apparently in 4.1 the free space on this partition is also checked now. Does anyone know if this value can be changed?

The following message shows up whenever I try to run a search:

The minimum free disk space (2048MB) reached for /opt/splunk/var/run/splunk/dispatch. The search was not run.

My server.conf contains:

[diskUsage]
minFreeSpace = 1000

Note: This is on a test upgrade Splunk instance that has less disk space, which is why I lowered this limit to 1000 MB. (I've made this too low before on our production box and paid the price for doing so, so I don't recommend this on a production box. On our production Splunk instance, we use minFreeSpace=5000.)

The 2 GB partition for $SPLUNK_HOME/var/run is the same size on both my 4.1 test upgrade and our production instance, hence I'm looking for options. Is there a separate setting to change this limit?


Splunk Employee

The minFreeSpace setting in the [diskUsage] stanza goes in server.conf, not limits.conf. You should get better results with it there.

However, I really don't recommend tweaking this unless you are VERY CERTAIN about what you're doing. Searches themselves can use pretty significant amounts of temporary space, for example for sorting. Instead, I would recommend changing the maximum size of your indexes by setting maxTotalDataSizeMB in indexes.conf on an index-by-index basis.
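
For reference, a minimal indexes.conf sketch of that per-index approach (the index name and size below are just placeholders for illustration, not recommendations):

[main]
# cap the total size of this index's data (hot, warm, and cold buckets) at roughly 100 GB
maxTotalDataSizeMB = 100000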

Super Champion

(continued) The real point is that having two different limits gives the admin more control. If the admin chooses to use separate partitions, then they can set up appropriate independent usage limits. If they use a single partition, then they can decide whether indexing or searching should be stopped first when disk space runs low (there seems to be no obvious "right" answer for all cases as to which is preferred, so it seems best to let the admin decide based on their own usage situation).
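
Purely as a hypothetical sketch of the idea (only minFreeSpace actually exists today; the second key is invented for discussion), independent limits in server.conf might look something like this:

[diskUsage]
# real setting: free-space floor for the index filesystems
minFreeSpace = 5000
# hypothetical separate floor for the dispatch/search area (not an actual setting)
# dispatchMinFreeSpace = 2000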

Super Champion

I somewhat agree with your summary index example. (From a technical standpoint, doesn't the search have to complete successfully before the .stash file gets created?) But even so, summary indexing seems to be a fragile feature in any case, so you need to know how to fix it manually. From my own experience, I've had summary indexing problems due to indexing being paused (low disk space), forwarders going down, and most frequently, searches missed simply because splunkd was being restarted. Fortunately, it's much easier to fix with the fill_summary_index.py script in version 4.0.
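
For anyone hitting the same problem, the backfill invocation looks roughly like this as I understand the docs; the app name, search name, time range, and credentials below are placeholders, so check the script's own help output for the exact options in your version:

# backfill the last 30 days of a scheduled summary-indexing search (placeholder values)
$SPLUNK_HOME/bin/splunk cmd python fill_summary_index.py -app search -name "my summary search" -et -30d -lt now -j 4 -dedup true -auth admin:changeme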


Splunk Employee

With summary indexing, you might get bad data if dispatch runs out of space too. But not index corruption, just bad data. What would you want to happen in this case? Treat it like an exhausted quota?


Super Champion

Do you know if this will be a separate setting? It seems like it should be, IMHO. Here is why: I certainly want a higher level of free space for my indexes than for the dispatch folder. (If a search runs out of room, it's inconvenient, but if an index runs out of room you could end up with data corruption, which is what I really want to avoid.) If there is only one setting, for example minFreeSpace=2000, and I increase $SPLUNK_HOME/var/run to 4000 MB, I've just added 2000 MB of space but I still can't really use more than 2000 MB of it. I've never seen searches cause a 2 GB spike.

Splunk Employee

Whoops, apparently 4.1 has a bug where this setting is not honored. 4.1.1 will honor it; I do not have a schedule for that release.

Splunk Employee

I wish I knew offhand the set of mountpoints we check. I know we got "smarter" recently: I think we used to check var/lib/splunk, and now we get the filesystem for every index path and verify they are all above the limit. None of those are var/run. Can you review the filesystems your indexes are on?
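
Assuming a default-ish layout, something like this would show the configured index paths and the free space on their filesystems (adjust the paths to match your own indexes.conf):

# list the paths configured for each index, then check free space on those filesystems
$SPLUNK_HOME/bin/splunk btool indexes list | grep -i path
df -h /opt/splunk/var/lib/splunk /opt/splunk/var/run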


Super Champion

Whoops, this setting was in my server.conf; my original post was incorrect (not sure what I was looking at). I was trying to point out that minFreeSpace doesn't seem to affect the message I'm getting, so this must be an additional disk space check? (BTW, the reason I have a separate $SPLUNK_HOME/var/run is that searches can use up a bunch of space, but now it seems as though Splunk wants even more space.)