I am hoping someone can confirm whether it is possible to limit the maximum memory usage of a Splunk Universal Forwarder. If yes, is the config file named "limits.conf"?
Just to add:
- Will there be any issues arising from limiting the memory usage?
- I understand we can also limit memory usage at the OS level (we have not tested this yet); are there any advantages/disadvantages to that approach?
- Where can I find an official statement from Splunk confirming that this configuration is possible?
This is not currently supported natively in the Universal Forwarder. You could do it at the OS/user level. There will be a negative impact on performance if the forwarder's workload requires more memory than the specified limit.
But you can optimize the forwarder workload, check this: https://community.splunk.com/t5/Getting-Data-In/Is-there-a-way-set-CPU-and-Memory-consumption-for-sp...
Hi
I think you can set a maximum memory for the UF (and also for a Splunk server) through the systemd configuration, by updating the MemoryLimit parameter. It is set initially when you enable boot-start, based on the host's memory at that time.
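For illustration, a drop-in override might look like the sketch below. This assumes the UF's unit is named SplunkForwarder.service; your unit name may differ, so check with `systemctl list-units | grep -i splunk` first.

```
# /etc/systemd/system/SplunkForwarder.service.d/memory.conf
[Service]
# MemoryLimit= is deprecated on newer systemd (v231+); use MemoryMax= there
MemoryLimit=512M
```

After saving the file, run `systemctl daemon-reload` and restart the forwarder for the limit to take effect.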
I'm not sure whether this makes the UF itself avoid using more memory than the limit, or whether systemd just kills the UF if it tries to exceed it?
r. Ismo
Hi,
Just to confirm/enquire further: do you mean we will be creating a service/script to run on the particular server, or is there already a Splunk default config file with these settings for us to edit?
When you enable Splunk boot-start (please check the exact syntax in the docs) as a systemd-managed service, Splunk creates a systemd unit file in /etc/systemd/system. Its name is Splunkd.service or something similar. You can change it if needed/wanted via splunk-launch.conf.
As I said earlier, when you run "splunk enable boot-start ...." as root, it creates the systemd unit file with standard values based on your host's current physical attributes. If you want to restrict memory usage, just decrease that memory parameter. I assume this restricts Splunk's memory usage.
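As a concrete sketch of the workflow (the -systemd-managed flag exists on recent UF versions, but verify the exact syntax against the docs for your version, and note the unit name may differ from SplunkForwarder.service):

```
# Enable boot-start as a systemd-managed service (run as root)
$SPLUNK_HOME/bin/splunk enable boot-start -systemd-managed 1 -user splunk

# Inspect the generated unit and its memory setting
systemctl cat SplunkForwarder.service | grep -i memory

# Lower the limit via a drop-in override rather than editing the unit in place
systemctl edit SplunkForwarder.service   # add [Service] and MemoryLimit=... here
systemctl daemon-reload
systemctl restart SplunkForwarder.service
```

Using `systemctl edit` keeps your change in a drop-in file, so it survives the unit being regenerated.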
Hi,
Thanks for the info.
"There will be negative impacts on performance if the forwarder workload requires memory more than the specified limit." - This was also one of our concerns, as we do have some UFs that are configured to monitor quite a number of locations, e.g. more than 20.
In your experience, how much memory might be used in such a case?
So far I have seen Splunk services using up to 4 GB on a Windows server and impacting other processes, but the cause was that the Splunk UF installation had not been done properly, which led to a memory leak.