
Search Head physical memory utilization increasing 2% per day

slider8p2023
Explorer

Hi All 

Problem description:

Search Head physical memory utilization increasing 2% per day

Instance deployment:

Running Splunk Enterprise version 9.0.3 with 2 un-clustered Search Heads. The main SH showing this issue has the following allocated:

48 CPU Cores | Physical Mem 32097 MB | Search Concurrency 10 | CPU usage 2% | Memory usage 57% | Linux 8.7

It is used to search across a cluster of 6 indexers.

I've had Splunk Support look into it, and they reported this could be due to an internal bug fixed in 9.0.7 and 9.1.2 (Jira SPL-241171). The actual bug fix is tracked by the following Jira:
SPL-228226: SummarizationHandler::handleList() calls getSavedSearches for all users, which uses a lot of memory, causing OOM at Progressive

A workaround of setting do_not_use_summaries = true in limits.conf did not fix the issue. The splunkd server process seems to be the main component increasing its memory usage over time.
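
For reference, the workaround amounts to something like the following (my understanding is that do_not_use_summaries belongs under the [search] stanza of limits.conf on the search head; adjust to your own config layering):

  # $SPLUNK_HOME/etc/system/local/limits.conf (search head)
  [search]
  # Suggested workaround: stop the summarization endpoint from loading saved searches for all users
  do_not_use_summaries = true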

Restarting the Splunk process lowers memory usage and resets the trend, but it then climbs again at a slow rate.
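
If anyone wants to compare trends, a search along these lines against the introspection data should show the splunkd memory growth over time (field names assume the standard splunk_resource_usage sourcetype; replace the host filter with your own search head):

  index=_introspection host=<your_search_head> sourcetype=splunk_resource_usage component=PerProcess data.process=splunkd
  | timechart span=1h avg(data.mem_used) AS avg_mem_used_mb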

 

If anyone could share a similar experience, so we can validate Splunk Support's recommendation of upgrading to 9.1.2 based on the symptoms described above, it would be appreciated.

Thanks  

 
