As an alternative to relying on the Splunk 4.1.7 Light Forwarder to set and manage its own upper memory usage limit, I’ve been investigating using `ulimit` on Solaris servers to cap the maximum memory Splunk can use. I’ve set the following limits:
Virtual Memory: 800,000 KB (`ulimit -v 800000` – sets both soft and hard limits)
Heap Size: 700,000 KB (`ulimit -d 700000` – sets both soft and hard limits)
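For reference, a minimal sketch of applying these limits in the shell that launches the forwarder (the install path shown is a placeholder, not taken from the original post):

```shell
#!/bin/sh
# Apply the memory caps in the shell that will launch splunkd;
# ulimit -v and -d take values in KB by default.
ulimit -v 800000   # virtual address space: sets both soft and hard limits
ulimit -d 700000   # data segment (heap): sets both soft and hard limits

# Confirm the effective limits before starting Splunk.
ulimit -v
ulimit -d

# /opt/splunkforwarder/bin/splunk start   # placeholder path
```

Because `ulimit` without `-S` or `-H` sets both the soft and hard limits, the process cannot raise them again later, so the cap is effective for splunkd and any child processes it spawns.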
Despite these limits being more than four times the 198 MB maximum memory usage Splunk specifies for the 4.1.7 Light Forwarder (with a 100-event output queue), ERROR messages are still written to splunkd.log complaining that the limits are too small. The following excerpt from splunkd.log shows this:
04-05-2011 13:46:59.282 INFO ulimit - Limit: virtual address space size: 819200000 bytes
04-05-2011 13:46:59.282 ERROR ulimit - Splunk may not work due to small virtual address space limit!
04-05-2011 13:46:59.282 INFO ulimit - Limit: mapped address space size: 819200000 bytes
04-05-2011 13:46:59.282 ERROR ulimit - Splunk may not work due to small mapped address space limit!
04-05-2011 13:46:59.283 INFO ulimit - Limit: data segment size: 716800000 bytes
04-05-2011 13:46:59.283 ERROR ulimit - Splunk may not work due to small data segment limit!
It therefore seems that these ERROR messages are generated whenever any hard ulimit is set, even if the limit allows far more memory than Splunk actually requires. In operational use I’d need to set these ulimits to lower values still: Virtual Memory 200 MB and Heap Size 170 MB. Can you therefore say whether the ulimit ERROR messages in splunkd.log can be safely ignored?
This can be safely ignored for the Splunk Light Forwarder or Universal Forwarder.
Currently we perform a check to see whether the configured memory limit is less than 1 GB and produce the above ERROR message if so (we will open a bug to take care of this). The LWF should have no problem running with the above ulimit settings.
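To make the behaviour concrete, here is an illustrative sketch of the check described above (not Splunk’s actual code): each configured limit is compared against a fixed 1 GB threshold, so the 819,200,000-byte virtual address space limit from the log excerpt trips the message even though it is ample for a Light Forwarder.

```shell
#!/bin/sh
# Illustration only – not Splunk's actual implementation.
threshold=$((1024 * 1024 * 1024))   # 1 GB in bytes
vlimit=$((800000 * 1024))           # 819200000 bytes, as logged by splunkd

# 819200000 < 1073741824, so the warning path is taken.
if [ "$vlimit" -lt "$threshold" ]; then
  echo "Splunk may not work due to small virtual address space limit!"
fi
```

This is why the ERROR appears for any limit below 1 GB, regardless of how much headroom it leaves over the forwarder’s real 198 MB requirement.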