
Too many open files, but I have a ulimit of 65536 - is this too small?

robertlynch2020
Influencer

Hi 

I am getting "Too many open files" errors, but I have a ulimit of 65536.

I believe I have set Splunk up correctly, but my search head has crashed twice in the last 2 days.

Is 65536 too small? Should I try to make it bigger?


bash$ cat /proc/32536/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        unlimited            unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             790527               790527               processes
Max open files            65536                65536                files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       1546577              1546577              signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
hp737srv autoengine /hp737srv2/apps/splunk/
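
If it does turn out that 65536 is too small, I assume the way to raise it on a systemd-managed host is a drop-in override for the Splunk unit (since limits set in /etc/security/limits.conf would not apply to the service). This is only a sketch of what I would try; the unit name Splunkd.service and the value 200000 are my own guesses, not something Splunk support has recommended:

bash$ sudo mkdir -p /etc/systemd/system/Splunkd.service.d
bash$ sudo tee /etc/systemd/system/Splunkd.service.d/limits.conf <<'EOF'
# Raise the open-file limit for splunkd (200000 is a placeholder, not a recommended value)
[Service]
LimitNOFILE=200000
EOF
bash$ sudo systemctl daemon-reload && sudo systemctl restart Splunkd.service
bash$ grep "open files" /proc/$(pgrep -o splunkd)/limits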


I am also getting the following messages from my 3 indexers (I have an indexer cluster):

[Screenshot: warning messages from the three indexers]


When I run the following command, I can see that one hour after startup Splunk is already using around 4,554 file descriptors:


bash$ lsof -u autoengine | grep splunk | awk 'BEGIN { total = 0; } $4 ~ /^[0-9]/ { total += 1 } END { print total }'
4554
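
To see whether that number keeps climbing (which would point to a descriptor leak) or levels off, I am planning to log the count every few minutes with a loop like the one below, run as the autoengine user. The PID 32536 and the 5-minute interval are just what I would start with:

bash$ while true; do
>   # /proc/<pid>/fd has one entry per open descriptor (regular files, sockets and pipes alike)
>   echo "$(date '+%F %T') $(ls /proc/32536/fd | wc -l)"
>   sleep 300
> done >> /tmp/splunkd_fd_count.log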


At the moment I have opened a case with Splunk support, but I might have to put in nightly restarts if it keeps happening.

In the last few months I have set up a heavy forwarder to send HEC data to the indexers. This data volume has been increasing, so I am not sure if that is the issue.
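
Since HEC connections are TCP sockets, and sockets count against the same open-files limit as bucket and index files, I was thinking of breaking the splunkd descriptors down by type to see which kind is actually growing. A rough sketch, again assuming PID 32536:

bash$ # Column 5 of lsof output is the descriptor TYPE (REG = regular file, IPv4/IPv6 = socket, ...)
bash$ lsof -p 32536 | awk 'NR > 1 { count[$5]++ } END { for (t in count) print t, count[t] }'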

Thanks in advance
