Deployment Architecture

Splunk process killed by OS [out of memory] on search head

Yod_ssoni
Explorer

Hi,
This morning, 2 of the 3 search heads in our cluster went down. When I checked, the splunkd process had been killed by the OS with an 'out of memory' message in /var/log/messages, yet the system appeared to have enough memory at the time of the kill (around 38% was free). I did not find any error messages in splunkd.log. Can anyone please let me know how to find the root cause of this issue and fix it? This has already happened 3-4 times.
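
If it helps, here is a minimal sketch of how to pull the relevant lines out of the log (the path is the RHEL/CentOS default; dmesg or journalctl -k show the same kernel report):

    #!/usr/bin/env python3
    """Minimal sketch: find the kernel OOM-killer lines for splunkd in the syslog.

    Assumes an RHEL/CentOS-style /var/log/messages; adjust the path (or read
    journalctl -k) on other distros.
    """
    LOGFILE = "/var/log/messages"  # assumption: distro-specific location

    KEYWORDS = ("out of memory", "oom-killer", "killed process")

    with open(LOGFILE, errors="replace") as f:
        for line in f:
            if any(k in line.lower() for k in KEYWORDS):
                print(line.rstrip())

The lines around the kill also include the kernel's own memory counters and a per-process table from the moment of the kill, which gives a better picture of memory pressure at that instant than an overall free-memory percentage.
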
Thanks in advance.

thanks,
Shashank Soni.

0 Karma

woodcock
Esteemed Legend

THP being enabled is the #1 reason for poor Splunk RAM management. Run a health check from your MC (Monitoring Console) and see if everything is set up correctly; it checks ulimits, too.
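
If you want to spot-check THP by hand outside the MC, here is a minimal sketch that just reads the standard sysfs files (the bracketed word in each file is the active setting; you want [never]):

    #!/usr/bin/env python3
    """Minimal sketch: report the current Transparent Huge Pages settings."""
    from pathlib import Path

    THP_DIR = Path("/sys/kernel/mm/transparent_hugepage")  # standard Linux location

    for name in ("enabled", "defrag"):
        path = THP_DIR / name
        if path.exists():
            # e.g. "always madvise [never]" -- the bracketed word is what is in effect
            print(f"{name}: {path.read_text().strip()}")
        else:
            print(f"{name}: not present (THP not built into this kernel)")

If it reports [always] or [madvise], disable THP and restart Splunk; the release-notes link below explains why.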

lacastillo
Path Finder

Just to add to woodcock's answer, here's a link describing THP's impact on memory:

http://docs.splunk.com/Documentation/Splunk/7.1.1/ReleaseNotes/SplunkandTHP

Hope this helps.

0 Karma

klaxdal
Contributor

Have you checked your ulimits?
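
Here is a minimal sketch of checking them for whatever user runs splunkd (run it as that user, e.g. with sudo -u splunk); the floor values are just commonly cited minimums, so treat them as assumptions and check the docs for your version:

    #!/usr/bin/env python3
    """Minimal sketch: print the resource limits Splunk typically cares about."""
    import resource

    def fmt(v):
        return "unlimited" if v == resource.RLIM_INFINITY else str(v)

    # (limit constant, label, commonly cited floor -- assumption)
    CHECKS = [
        (resource.RLIMIT_NOFILE, "open files (ulimit -n)", 64000),
        (resource.RLIMIT_NPROC, "user processes (ulimit -u)", 16000),
        (resource.RLIMIT_DATA, "data segment size (ulimit -d)", None),
    ]

    for rlim, label, floor in CHECKS:
        soft, hard = resource.getrlimit(rlim)
        warn = ""
        if floor is not None and soft != resource.RLIM_INFINITY and soft < floor:
            warn = f"   <-- below the commonly cited {floor}"
        print(f"{label}: soft={fmt(soft)} hard={fmt(hard)}{warn}")

This only shows the limits of the user running the script; splunkd also records the limits it actually started with near the top of splunkd.log.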

jkat54
SplunkTrust

Ulimits can trigger the OOM killer too, from what I understand. Upvoting.

0 Karma
