Hi,
The splunkd process running on the indexers is using more and more RAM.
Within the last 7 days its usage has grown from 9.8% to 70% of 20 GB of RAM, and the Splunk process gets killed once it hits the maximum. I'm running Splunk 5.0.2 on a Linux VM with the *nix app installed,
and the memory usage by process is:
Process    Resident_MB     Virtual_MB      Memory %
splunkd    13379.101562    14352.585938    66.6
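These numbers come from the *nix app's per-process data. If it helps to watch the trend outside of Splunk, here is a minimal sketch of sampling splunkd memory on the box; it assumes the psutil Python package is installed, and the 5-minute interval is arbitrary.

import time
import psutil

def splunkd_memory_mb():
    """Return (resident_mb, virtual_mb, percent of total RAM) summed over splunkd processes."""
    rss = vms = 0
    for proc in psutil.process_iter(["name", "memory_info"]):
        if proc.info["name"] == "splunkd":
            mem = proc.info["memory_info"]
            rss += mem.rss
            vms += mem.vms
    total = psutil.virtual_memory().total
    return rss / 2**20, vms / 2**20, 100.0 * rss / total

if __name__ == "__main__":
    # Log a sample every 5 minutes; a steady climb under constant load suggests a leak.
    while True:
        resident, virt, pct = splunkd_memory_mb()
        print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} "
              f"resident={resident:.1f}MB virtual={virt:.1f}MB pct={pct:.1f}%")
        time.sleep(300)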
How do I solve this issue?
Clustering is not expected to use 20 GB of RAM. In our tests we have not seen it go over 3 GB, so I would think there is a memory leak involved. Which version of Splunk is it? Please file a support case and we will help find the leak.
I'm aware of this, but I thought Splunk supported this.
Clustering is expensive:
- multiply all the disk space requirements by the replication factor (see the rough sizing sketch after this list).
- more network traffic
- more disk I/O
- more processing of the copies of the replicated buckets.
And it can also be resource intensive, since all the events have to be parsed twice at index time if you require multiple copies of the buckets to be searchable (search factor > 1).
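To make the disk math concrete, here is a rough back-of-the-envelope sketch. The compression and index-overhead ratios, daily volume, and retention below are illustrative assumptions, not Splunk sizing guidance; the point is that rawdata copies scale with the replication factor while searchable (tsidx) copies scale with the search factor.

# Rough clustered-storage sizing sketch (all ratios are assumptions).
daily_raw_gb = 50.0          # raw data indexed per day (assumption)
retention_days = 90          # how long buckets are kept (assumption)
raw_compression = 0.15       # compressed rawdata as a fraction of raw (assumption)
index_overhead = 0.35        # tsidx/searchable files as a fraction of raw (assumption)

replication_factor = 3       # total copies of each bucket's rawdata
search_factor = 2            # copies that also keep searchable index files

# Every copy stores the compressed rawdata; only search-factor copies
# also store the index (tsidx) files that make the bucket searchable.
rawdata_gb = daily_raw_gb * retention_days * raw_compression * replication_factor
tsidx_gb = daily_raw_gb * retention_days * index_overhead * search_factor

print(f"rawdata copies:           {rawdata_gb:.0f} GB")
print(f"searchable index copies:  {tsidx_gb:.0f} GB")
print(f"total across the cluster: {rawdata_gb + tsidx_gb:.0f} GB")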
Yes, this started after enabling clustering. I don't have any index-time anonymization or index-time filtering.