
Splunkd process on the indexers in a cluster using too much RAM

ssankeneni
Communicator

Hi,

The splunkd process running on the indexers is using more and more RAM.

Within the last 7 days its usage has increased from 9.8% to 70% of the 20 GB of RAM, and the Splunk process gets killed after it reaches the maximum. I'm running Splunk 5.0.2 on a Linux VM, with the *nix app installed. The memory usage by process is:
Process    Resident_MB     Virtual_MB      Memory %
splunkd    13379.101562    14352.585938    66.6
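
A sketch of how this figure could be trended over time using the ps data that the *nix app collects; the index name and field names below are assumptions and depend on how the app is configured:

    index=os sourcetype=ps host=<indexer> splunkd
    | timechart span=1h max(RSZ_KB) AS resident_kb, max(pctMEM) AS pct_mem

A resident size that climbs steadily for days without plateauing usually points to a leak rather than normal working-set growth.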

How can I solve this issue?

yannK
Splunk Employee
  • Did it start after you configured clustering? (If yes: how many indexers do you have, and what are your replication factor and search factor?)
  • Do you have index-time anonymization (SEDCMD), or index-time filters using expensive regexes? A sketch of what a SEDCMD stanza looks like follows below.
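
For reference, this is roughly the shape of index-time anonymization in props.conf; the sourcetype and pattern here are hypothetical, just to show what a SEDCMD stanza looks like:

    # props.conf on the indexers (hypothetical sourcetype and pattern)
    [my_sourcetype]
    # Mask anything shaped like a US SSN at index time
    SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/xxx-xx-xxxx/g

SEDCMD and index-time transforms run in the parsing pipeline, so an expensive regex is paid on every incoming event.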

jkerai
Splunk Employee

Clustering is not expected to use 20 GB of RAM; in our tests we have not seen it go over 3 GB. So I would suspect an associated memory leak. Which version of Splunk is it? Please file a support case and we will help find the leak.
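
When you file the case, a diag from the affected indexer gives support most of what they need to start; it can be generated with the standard CLI command:

    # Run on the affected indexer; produces a tar.gz bundle to attach to the support case
    $SPLUNK_HOME/bin/splunk diag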


ssankeneni
Communicator

I'm aware of this, but I thought Splunk supported it.


yannK
Splunk Employee

Clustering is expensive:
- multiply all the disk space requirements by the replication factor
- more network traffic
- more disk I/O
- more processing of the copies of the replicated buckets

It can also be resource intensive if all the events have to be parsed more than once at index time, i.e. if you require multiple copies of the buckets to be searchable (a search factor > 1); see the sketch below for where these factors are set.
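
For reference, a sketch of where these factors are set, in server.conf on the cluster master; the values below are examples only:

    # server.conf on the cluster master (example values)
    [clustering]
    mode = master
    replication_factor = 3
    search_factor = 2

Each searchable copy beyond the first (search_factor) adds index-time work on the peers, while each copy overall (replication_factor) adds disk, network, and bucket-management overhead.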

ssankeneni
Communicator

Yes, this started after enabling clustering. I don't have any index-time anonymization or index-time filtering.
