Getting Data In

Minimum Disk Space for Splunk 6.x Universal Forwarder - Is it really 5GB?

bandit
Motivator

I think the bare minimum used to be about 250MB, and I often find UFs using under 200MB. It seems the disk requirements went up substantially.
Is 5GB of disk really required for the Universal Forwarder?

From: http://docs.splunk.com/Documentation/Splunk/latest/Installation/Systemrequirements#Recommended_hardw...
Important: For all installations, including forwarders, you must have a minimum of 5 GB of hard disk space available

1 Solution

jrodman
Splunk Employee
Splunk Employee

This comment is saying that you need to have 5GB available, not that it will use 5GB. This is because of the free-space checks the product makes to try to avoid running into the brick wall that is disk-space exhaustion. Probably we should tune the Universal Forwarder's requirement to be lower, because there is no way for it to use disk space rapidly.

As for the actual space used, it can end up fairly large (definitely larger than your 250MB estimate) from the total size of the various logs. If you need to fit a Universal Forwarder in a tiny space, you may want to tune the number of backup log copies, configured in etc/log.cfg, via an override file that you create at etc/log-local.cfg.
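As an illustrative sketch of that override (the appender names and default values here are assumptions; check your own etc/log.cfg for the actual entries before copying anything):

```
# etc/log-local.cfg -- settings here override etc/log.cfg on restart.
# Appender name (A1) and values are illustrative; mirror the lines you
# see in your etc/log.cfg for splunkd.log.
appender.A1.maxFileSize=10000000    # rotate splunkd.log at ~10MB
appender.A1.maxBackupIndex=2        # keep 2 rotated copies instead of the default
```

Using a separate log-local.cfg (rather than editing log.cfg directly) keeps the change from being clobbered on upgrade.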


malmoore
Splunk Employee
Splunk Employee

Hi, docteam here.

I've spoken with the top product manager for Splunk, and while he does not want to lower the 5GB guidance for any Splunk instance, I've softened the language so that we now recommend you maintain that level of free space.

You should not need 5GB for a forwarder in many scenarios. However, to account for logging plus internal recordkeeping data (the fishbucket), it's a good idea not to run too lean, even on a forwarder (for example, if it's forwarding a lot of inputs).

I hope this helps.

bandit
Motivator

To be clear, I'm asking about a forwarder not a search head or indexer.

Any updates to the requirement for the Splunk Universal Forwarder?

Important: For all installations, including forwarders, you must have a minimum of 5GB of hard disk space available in addition to the space required for any indexes. See "Estimate your storage requirements" in the Capacity Planning Manual for more information.

0 Karma

bandit
Motivator

If a Universal Forwarder usually uses under 250MB, where does the 5GB minimum requirement come from? That's more than 20x the disk footprint of a Universal Forwarder. Shouldn't it read something like 500MB minimum, 2GB recommended?

0 Karma

jrodman
Splunk Employee
Splunk Employee

minFreeSpace is enforced by the indexer, which isn't even loaded into memory on a Universal Forwarder.

bandit
Motivator

Right, so shouldn't the documentation be updated?

0 Karma

jrodman
Splunk Employee
Splunk Employee

I don't know the full logic that went into the docs being written in that precise way. I would suggest commenting directly on the doc. I can pass it to the docs team generally, but it seems more natural to raise the conversation directly with them so you stay in the loop.

0 Karma

bandit
Motivator

Right, I'm expecting that in most scenarios it should naturally stay under 500MB. I just wanted to do a sanity check that using a 2GB mount instead of a 5GB mount would not be an issue.

Thanks,

Rob

0 Karma

jrodman
Splunk Employee
Splunk Employee

It may well be an issue to use a 2GB mount -- I know we apply this check to index locations and the dispatch location, but I don't know whether the check is disabled in the Universal Forwarder. It may not be, because the constraint is applied by the IndexProcessor (not loaded) and the dispatch search machinery (not used, and it would only block search anyway).

If the limit does apply for the Universal Forwarder, you may want to tune it downward via the minFreeSpace setting in the [diskUsage] stanza of server.conf. Do NOT tune this value down on an indexer unless you enjoy the risk of data loss (written here for future readers).
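If you do lower it on a forwarder, the change would look something like this (a sketch: minFreeSpace is expressed in megabytes, and the 500 here is an illustrative value for a small mount, not an official recommendation):

```
# server.conf on the Universal Forwarder ONLY --
# never lower this on an indexer (risk of data loss).
[diskUsage]
minFreeSpace = 500
```

Restart the forwarder after the change for it to take effect.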

0 Karma

bandit
Motivator

I'm specifically interested in the mount size for a Universal Forwarder.

This is what the 5x version of the docs said:
Important: For all installations, including forwarders, you must have a minimum of 2 GB of hard disk space available in addition to the space required for any indexes. Refer to "Estimate your storage requirements" in this manual for additional information.

http://docs.splunk.com/Documentation/Splunk/5.0.10/Installation/Systemrequirements

Another way you could look at this: if you are upgrading the Universal Forwarder from 5.x to 6.x, do you need to increase the storage from 2GB to 5GB?

Maybe the 5GB should be a recommendation, not a minimum?

0 Karma

jrodman
Splunk Employee
Splunk Employee

Yes, I specifically talked about how this limit is applied. For most Splunk installs, the required free space on the filesystem will increase from 5.x to 6.x.

0 Karma