Splunk Enterprise

Search Head physical memory utilization - increasing 2% per day

slider8p2023
Loves-to-Learn Lots

Hi All 

Problem description:

Search Head physical memory utilization increasing 2% per day

Instance deployment:

Running Splunk Enterprise version 9.0.3 with 2 un-clustered Search Heads. The main SH affected by this issue has the following allocation:

48 CPU cores | Physical memory 32097 MB | Search concurrency 10 | CPU usage 2% | Memory usage 57% | Linux 8.7

It is used to search across a cluster of 6 indexers.

I've had Splunk Support look into it, and they reported this could be due to an internal bug fixed in 9.0.7 and 9.1.2 (Jira SPL-241171). The actual bug fix is covered by the following Jira:
SPL-228226: SummarizationHandler::handleList() calls getSavedSearches for all users which use a lot of memory, causing OOM at Progressive

A suggested workaround of setting do_not_use_summaries = true in limits.conf did not fix the issue. The splunkd server process seems to be the main component increasing its memory usage over time.
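
For reference, this is roughly how the workaround was applied (a minimal sketch; placing the setting under the [search] stanza of a local limits.conf is my assumption based on Support's guidance):

# $SPLUNK_HOME/etc/system/local/limits.conf
[search]
# Workaround suggested by Support for the summarization handler memory issue (SPL-228226)
do_not_use_summaries = true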

A splunkd restart lowers memory usage and resets the trend, but it then climbs again at a slow rate.
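
In case it helps anyone comparing symptoms, this is roughly the search we use to chart splunkd memory growth between restarts (a sketch based on the standard _introspection resource-usage events; the host value is a placeholder for the affected search head):

index=_introspection host=<affected_search_head> sourcetype=splunk_resource_usage component=PerProcess data.process=splunkd
| timechart span=1h max(data.mem_used) AS splunkd_mem_used_mb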

 

If anyone could share a similar experience, it would be appreciated; it would help us validate the Splunk Support recommendation of upgrading to 9.1.2 based on the symptoms described above.

Thanks  

 
