Dear all,
I have a problem with the number of open files on Linux. I have set the ulimit parameter to 999999, but the Splunk service still crashes due to file descriptor exhaustion. This happens on the search heads. Is there a way to tell Splunk not to open more files? I have tried
[inputproc]
max_fd = 120000
but it keeps opening far more files than that.
The Linux version is:
Oracle Linux Server release 7.8
Also, how can I count the number of open files per minute with a Splunk query?
thanks
Yes, the 999999 limit was set for the splunk user, but the service still crashes. Another symptom I have detected is that when many files are open on the SH, connections to port 8089 start to queue up.
With a systemd-managed service, the limits are defined in the service unit file - /etc/systemd/system/<whatever_you_called_your_service> (by default it's called Splunkd.service_<date>).
There you have a whole [Service] section which contains the limits (among other things).
If you edit the file, remember to reload the systemd configuration so it picks up the new settings.
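For reference, the relevant entries in the [Service] section look something like this (the values here are illustrative, not recommendations - use whatever your sizing requires):

```
[Service]
LimitNOFILE=999999
LimitNPROC=262144
LimitFSIZE=infinity
```

After editing, reload and restart (assuming your unit is named Splunkd): systemctl daemon-reload && systemctl restart Splunkd. Note that for a systemd-managed splunkd, these LimitNOFILE/LimitNPROC values are what counts - not /etc/security/limits.conf, which only applies to PAM login sessions.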
Hi @Gabriel_CCI,
the smartest solution is to open a case with Splunk Support.
Did you use the Monitoring Console Health Check to be sure that the ulimit is applied?
Ciao.
Giuseppe
Hi @Gabriel_CCI,
did you run the Monitoring Console Health Check?
That way you can be sure the parameter is applied.
Anyway, you have to configure the ulimit for the user running Splunk (usually splunk or root);
in other words, in "/etc/security/limits.conf" you have to configure (if your Splunk is running under the "splunk" user):
splunk hard nofile 999999
splunk soft nofile 999999
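To double-check that the limit is actually effective for the user Splunk runs as, a quick generic check (run it in a shell as the splunk user, e.g. via su - splunk):

```shell
# Soft and hard limits on open files for the current shell session.
# If these don't show 999999 after editing limits.conf, log out and
# back in first - PAM applies limits.conf at login time.
ulimit -Sn
ulimit -Hn
```

Keep in mind that if splunkd is started by systemd, the unit file's LimitNOFILE takes precedence over limits.conf for that process.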
Ciao.
Giuseppe
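About counting open files per minute from a Splunk query: one sketch is to use the _introspection index, assuming resource-usage introspection is enabled on the search head. The host filter below is a placeholder, and the field names (component=PerProcess, data.fd_used) should be verified against the events on your instance:

```
index=_introspection sourcetype=splunk_resource_usage component=PerProcess
    host=<your_search_head> data.process=splunkd
| timechart span=1m max(data.fd_used) AS open_file_descriptors
```

Charting this alongside the configured limit makes it easy to see how close splunkd gets to exhaustion before a crash.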