Monitoring Splunk

Why is the memory usage on the SearchHead/Forwarder really high?

kmcg1
New Member

Hi, I have two Standalone Search Heads that also act as Heavy Forwarders running Splunk 6.5.1 on RHEL 6.9.

The past few days, both instances have experienced what looks like a memory leak. Each morning, around the same time, both servers reach 100% memory usage and the splunkd process is killed by the kernel. The Historical Charts - Average Physical Memory Usage panel shows that the climb to 100% is linear on both servers, occurring gradually over several hours, and the various logs make it apparent that it is the splunkd process consuming all of the memory before the kernel kills it.
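For reference, a search along these lines against the _introspection index should show the same climb broken down by splunkd process type, assuming the default splunk_resource_usage fields (data.process, data.mem_used, data.process_type; mem_used appears to be reported in MB). host=<search_head> is just a placeholder for each server's host name:

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess
        host=<search_head> "data.process"=splunkd
    | eval mem_used_mb='data.mem_used'
    | timechart span=10m max(mem_used_mb) BY data.process_type

If the growth were coming from search processes rather than the main splunkd server process, I would expect it to show up as a separate series there.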

There have been no significant changes in the environment over the past few days, and I am unsure why this behavior is developing now. Is there any way to further troubleshoot the specifics of what may be causing the increased memory usage on these two servers?
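For example, would something like the following be a sensible way to see which specific processes were the biggest consumers in the hours before each kill? Again, this assumes the data.pid, data.args, and data.mem_used fields from the splunk_resource_usage sourcetype:

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess
        host=<search_head> earliest=-24h
    | eval mem_used_mb='data.mem_used'
    | stats max(mem_used_mb) AS peak_mem_mb latest(data.args) AS args BY data.pid data.process_type
    | sort - peak_mem_mb
    | head 20

On the RHEL side, the OOM killer entries in /var/log/messages should at least confirm the exact time of each kill, which would give a window to focus these searches on.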

Thanks in advance for any help!
