Splunk Enterprise

RAM overload

splunkg
Explorer

Hello, we have an issue where the RAM usage on our Splunk manager and search head increases over roughly one to two weeks. The splunkd service and some Python scripts slowly consume more and more RAM over time, and the splunkd service sometimes fails because there is not enough RAM left for it to execute its tasks.

We have 16 GB of RAM. After I restart the splunkd service, usage drops to 4.6 GB in use, but as described above it slowly climbs back up to 15.8 GB over time.

As you can see in the screenshot below, it goes up to 12 GB.

[screenshot: splunkg_0-1722845431963.png]

Current Splunk version: 9.2.0.1

Is there a known memory-leak bug in Splunk itself? Has anybody had the same issue before, and if so, how did you resolve it? Thank you.
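For reference, splunkd's own memory usage over time can be charted from the _introspection index (a minimal sketch, assuming introspection data is being collected on the affected host; field names follow the splunk_resource_usage sourcetype, and the host filter is a placeholder):

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess
        host=<your_manager_or_sh> data.process=splunkd
    | timechart span=1h max(data.mem_used) AS splunkd_mem_mb

This makes the slow climb (and the drop after a restart) visible as a single trend line.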

1 Solution

splunkg
Explorer

 Upgraded to version 9.3.1 and the problem is gone.


weiss_h
Explorer

Hi, if the issue exists on Windows, then it sounds like the general memory-leak problem we have had since February this year, which Splunk doesn't seem to acknowledge. See here: https://community.splunk.com/t5/Splunk-Enterprise/Memory-leak-in-Windows-Versions-of-Splunk-Enterpri...

Houlila
Engager

Are you using file monitoring? If you are, the issue may have to do with the following (see the sketch after this list):

  • When using a file monitoring or folder monitoring input, do not use recursive search (three-dot notation, ...); prefer non-recursive search with asterisk notation (*).
    • Example: [monitor:///home/*/.bash_history] is much better programmatically than [monitor:///home/.../.bash_history]
  • When you absolutely must use recursive search:
    • Make sure the total number of files under the root directory you are monitoring is not huge.
    • Make sure there are no cyclic links that could cause Splunk to go into an infinite loop.
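
For illustration, here is a minimal inputs.conf sketch contrasting the two notations (the /home path is just the hypothetical example from the bullet above):

    # Wildcard form: * matches a single directory level, so splunkd tracks far fewer files
    [monitor:///home/*/.bash_history]
    disabled = false

    # Recursive form, for comparison: ... descends into every subdirectory
    # [monitor:///home/.../.bash_history]

The wildcard stanza keeps the file tracker bounded, while the recursive one can force splunkd to keep state for every file in the tree.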

PickleRick
SplunkTrust

Well, your screenshot shows... something. The only thing I can try to deduce from it is that you're running your Splunk environment on Windows (and even that is just a guess).

Anyway. The cluster manager (CM) - if it's not doing anything else - should have fairly constant memory usage determined by the size of your environment (the number of indexers and of buckets on those indexers). Of course it will start small and quite quickly build up memory usage as peers register with it and report their buckets, but after that the growth should slow down significantly. If you have constant linear growth... it might warrant a support case. Or you might simply have too small a machine for your CM.

In the SH case, though, it's not that easy, because memory usage depends heavily on activity - the number of searches, the searches themselves, your users' limits and so on. You could use the Monitoring Console to see what your users are doing and what is consuming the most memory.
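
For example, something along these lines against the _introspection data (a sketch only; field names follow the splunk_resource_usage sourcetype and may need adjusting for your version) lists the searches with the highest peak memory:

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process_type=search
    | stats max(data.mem_used) AS peak_mem_mb BY data.search_props.sid data.search_props.user data.search_props.app
    | sort - peak_mem_mb
    | head 20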

Anyway, 12 GB RAM is the minimum reference SH specification and it's very rarely enough (and if you're using premium apps like ES or ITSI, it's way below the recommended specs).
