I am working on a Centralized Log Management project using the latest Splunk version, with an indexer installed on Windows Server 2012 and forwarders installed on 2 Linux servers, 4 Windows Server 2012 machines, and 8 Windows PCs.
I installed the Splunk Enterprise Trial on the indexer for my pilot testing, which has a 500 MB daily indexing volume.
I need full centralized control over many forwarders from one indexer server, and I am facing the problems below:
The splunkd service's RAM utilization is very high on the forwarder machines (around 60 MB and constantly increasing), and I cannot control it by changing any of the many parameters in the configuration files (inputs.conf, outputs.conf, limits.conf).
So, which parameters control RAM utilization, and where are they located on the forwarder PC (config file name and path)?
Note that I configured inputs.conf using an app controlled by the indexer server.
I enabled SSL, and the collected logs are stored encrypted in the indexes path, but I only want to encrypt the traffic between the indexer and forwarder and store the log files as-is (clear text) on the indexer server.
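For reference, this is a sketch of a typical forwarder-side SSL forwarding setup in outputs.conf; the host name, port, and certificate paths below are placeholders, not the actual values from this deployment:

```
# outputs.conf on the forwarder (placeholder names and paths)
[tcpout:primary_indexers]
server = indexer.example.com:9997
sslCertPath = $SPLUNK_HOME/etc/auth/server.pem
sslRootCAPath = $SPLUNK_HOME/etc/auth/cacert.pem
sslPassword = <certificate password>
```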
I can only restart the forwarder service remotely using the CLI; I canNOT manage forwarders remotely (stop, start, uninstall) from the indexer server.
Could you please advise?
On a UF, splunkd memory usage increases as the number of inputs increases. Here is another question that gives some good advice about how to reduce memory usage.
Limit the memory used by the universal forwarder
In my own experience, without tuning, the UF uses about 40 MB of memory in general. When monitoring 2000 files, the UF uses about 50 MB of memory. In the past, I've seen the UF get very slow, with high memory and CPU usage, when monitoring over 5000 files.
I often find that people do not realize how many files Splunk is monitoring. Every monitored file, even one that is no longer being updated with new data, consumes some resources. On the UF, run `splunk list monitor` to see exactly what Splunk is monitoring. Sometimes you can dramatically improve UF performance simply by moving older files to a directory that the UF is not monitoring.
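If moving the files isn't practical, inputs.conf can exclude old or archived files from monitoring instead. A minimal sketch, assuming an illustrative monitored path and file patterns (adjust both to your environment):

```
# inputs.conf on the UF (illustrative path and patterns)
[monitor:///var/log/myapp]
# skip files whose modification time is older than 7 days
ignoreOlderThan = 7d
# don't monitor archived/rotated copies
blacklist = \.(gz|zip|bak)$
```

After changing inputs.conf, restart the forwarder and re-run `splunk list monitor` to confirm the monitored file count has dropped.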
You can use the deployment server to manage your Splunk forwarders, though this works via a subscription model that only manages inputs and other settings. If you are looking to manage the service itself, you will have to use remote WMI/PowerShell for Windows and scripted commands over SSH for Linux. There is a remote CLI for Splunk, but it doesn't allow remote start and stop: if you stopped the service, there would be no service listening for the start command.
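A minimal sketch of what that looks like in practice, run from an admin host with PowerShell; the host names below are hypothetical, and Splunk itself provides no remote start/stop, so this leans entirely on OS-level remoting:

```
# Windows forwarders: control the SplunkForwarder service via the
# Service Control Manager (requires admin rights on the target host)
sc.exe \\WIN-FWD-01 stop SplunkForwarder
sc.exe \\WIN-FWD-01 start SplunkForwarder

# Linux forwarders: run the forwarder's own CLI over SSH
ssh admin@linux-fwd-01 "/opt/splunkforwarder/bin/splunk restart"
```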
As for the Splunk memory size, memory will grow based on the number of inputs/monitors you have enabled. I have seen 120 MB just from running nearly every Windows monitor under the sun.
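One way to keep that growth in check, as a sketch: explicitly disable the Windows event log channels you don't need in the forwarder's inputs.conf. The stanza names below are standard Windows channels; which ones to keep enabled is your call:

```
# inputs.conf on a Windows UF -- keep only the event logs you actually need
[WinEventLog://Security]
disabled = 0

[WinEventLog://Application]
disabled = 1

[WinEventLog://System]
disabled = 1
```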