Our Splunk cluster indexers are consuming high memory. Memory usage on the indexer servers is always at 99%; after restarting Splunk it comes down, but within a minute it is back at 99%. Nothing in the logs indicates what is causing this.
Also, on the same indexers, internal_db is filling up very quickly. Are the two issues related?
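To put a number on how fast internal_db is growing, you can check its size on disk directly; a minimal check, assuming a default Linux install path (adjust SPLUNK_DB if your index volumes have been relocated):

```shell
# Default database path for a Linux install; override if yours differs.
SPLUNK_DB=${SPLUNK_DB:-/opt/splunk/var/lib/splunk}

# Rough on-disk size of the _internal index (stored under _internaldb).
du -sh "$SPLUNK_DB/_internaldb" 2>/dev/null \
  || echo "path not found: $SPLUNK_DB/_internaldb"
```

Running this a few times over an hour gives a growth rate you can compare against the retention settings for the _internal index.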
We have 23 GB of memory allocated to each indexer (5 in total in the cluster), and we are logging around 400-500 GB of data in this environment.
Splunk version 7.2.3.
One more thing: is this a known issue after upgrading from 6.x to 7.x.x? While the environment was on 6.5.3 we didn't face any memory-related issues, but at that time we were logging around 300 GB of data and each indexer had 12 GB of memory.
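Before digging further, it helps to confirm it is actually splunkd (and not search processes or something else on the box) holding the memory; a quick check on Linux, nothing Splunk-specific assumed:

```shell
# Show the ten processes with the largest resident memory (RSS, in KB).
# On a busy indexer, splunkd and its search processes usually top this list.
ps -eo pid,rss,comm --sort=-rss | head -n 11
```

If the memory is spread across many short-lived search processes rather than the main splunkd process, that points at search load rather than indexing.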
How many indexers are in the cluster? How many concurrent users are on your Search Head(s), and do you run a lot of scheduled searches (alerts, reports)?
Also, an indexer is expected to use more of the available RAM under Splunk 7.x than under 6.x.
However, you may want to consider upgrading to the latest 7.2.x release, which is 7.2.7 at the time of writing.
Do you have the Monitoring Console deployed in your environment? If so, I suggest you look at the Indexing reports for important information such as the indexing pipeline, the indexing/receiving queues, forwarding issues, and hardware resource usage. Besides that, check the role assigned to each of your indexers.
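If the Monitoring Console is not set up, the same per-process memory data can be pulled straight from the `_introspection` index with a search like the sketch below (field names are from the resource-usage introspection input, where `data.mem_used` is reported in MB; verify them on your version):

```
index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process=splunkd
| timechart span=5m avg(data.mem_used) BY host
```

A steady climb back to the full 23 GB right after a restart on every host would suggest normal cache/pipeline memory use rather than a leak on one bad indexer.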