Monitoring Splunk

Why is the memory usage on the SearchHead/Forwarder really high?

kmcg1
New Member

Hi, I have two Standalone Search Heads that also act as Heavy Forwarders running Splunk 6.5.1 on RHEL 6.9.

The past few days both instances have experienced what looks like a memory leak. Each morning, around the same time, both servers reach 100% memory usage and the splunkd process is killed by the kernel. The Historical Charts - Average Physical Memory Usage panel shows the climb to 100% is linear on both servers, occurring gradually over several hours. The logs make it apparent that the splunkd process is consuming all of the memory, at which point the kernel kills it.

There have been no significant changes in the environment in the past few days, and I am unsure why this behavior is developing now. Is there any way to further troubleshoot the specifics of what may be causing the increased memory usage on these two servers?
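One low-level check worth running alongside Splunk's own charts is to sample splunkd's resident memory from the OS and confirm the OOM killer is what terminates it. The sketch below is a hypothetical example, not Splunk tooling: the log path and the cron-driven sampling cadence are assumptions, and on RHEL 6 kernel OOM messages land in /var/log/messages.

```shell
# Hypothetical sketch: sample splunkd's resident memory (RSS) so the growth
# curve can later be correlated with what splunkd was doing at that time.
# LOG path is an assumption; adjust to taste.
LOG=/tmp/splunkd_mem.log

sample_splunkd_mem() {
  # ps -C matches by exact command name; rss is resident memory in KB,
  # etime is how long the process has been running
  ps -C splunkd -o pid=,rss=,etime= 2>/dev/null | while read -r pid rss etime; do
    printf '%s pid=%s rss_kb=%s uptime=%s\n' "$(date '+%F %T')" "$pid" "$rss" "$etime"
  done
}

# Take one sample; schedule from cron (e.g. every minute) to build a trend line
sample_splunkd_mem >> "$LOG"

# Confirm the kernel OOM killer was the culprit on RHEL 6
grep -i 'out of memory\|killed process' /var/log/messages 2>/dev/null | tail -n 5 || true
```

Correlating the timestamps where RSS starts climbing with scheduled searches or forwarding activity in splunkd's internal logs can narrow down which component is leaking.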

Thanks in advance for any help!
