I have a RHEL 6.10 Splunk server and currently have the following configuration in /etc/security/limits.d/ for open file descriptors:
root soft nofile 64512
root hard nofile 80896
But when I run the Monitoring Console Health Check, it reports that the current ulimit.open_files = 4096. I tried using * instead of root but that didn't change anything.
I have seen this issue brought up, but for RHEL 7.x, which uses systemd instead of SysV.
I was able to permanently solve this by altering /etc/init.d/splunk. Placing the following at the beginning of the splunk_start() and splunk_restart() functions applies the changes at boot or whenever the init script is invoked through service (there is no point in setting them when requesting a service status or stop):
ulimit -Sn 64514 2> /dev/null
ulimit -Hn 80896 2> /dev/null
When these limit settings are placed in /etc/security/limits.d/ or limits.conf, Splunk doesn't inherit them, regardless of the user assigned to run Splunk. The likely reason is that limits.conf is applied by pam_limits, which only runs for PAM login sessions; a SysV init script executed at boot never goes through PAM, so the daemon keeps the defaults. Changing the init file for Splunk avoids that path entirely.
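You can check that PAM is what normally applies those files (an assumption about a stock RHEL 6 PAM stack; the exact file name can vary) by looking for the pam_limits module:

grep pam_limits /etc/pam.d/system-auth

On a default install this should print a line like "session required pam_limits.so", and that module only fires for PAM login sessions, not for init scripts run at boot.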
NOTE: This approach does not work if you restart Splunk using $SPLUNK_HOME/bin/splunk start|restart, as that starts or restarts the software without calling its init file. For this approach to work you need to use "service splunk start|restart"
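For reference, here is a minimal sketch of what the edited functions might look like. The function bodies are only illustrative of the kind of script that splunk enable boot-start generates (your generated script will hardcode its own paths and carry extra logic); the point is simply that the ulimit lines come first:

splunk_start() {
    # Raise the file-descriptor limits before splunkd is launched;
    # stderr is discarded in case the shell refuses the change.
    ulimit -Sn 64514 2> /dev/null
    ulimit -Hn 80896 2> /dev/null
    echo Starting Splunk...
    "$SPLUNK_HOME/bin/splunk" start --no-prompt --answer-yes
}

splunk_restart() {
    ulimit -Sn 64514 2> /dev/null
    ulimit -Hn 80896 2> /dev/null
    echo Restarting Splunk...
    "$SPLUNK_HOME/bin/splunk" restart
}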
Hello @ricotries,
configure it directly in the /etc/security/limits.conf file:
root soft nofile 64514
root hard nofile 80896
and reboot the machine.
# grep files /proc/$(pgrep bash)/limits
Max open files            64514                80896                files
Let me know how it went.
Before even writing it directly to /etc/security/limits.conf, checking Bash limits shows they are the ones set under /etc/security/limits.d/.
When applied directly in /etc/security/limits.conf there is still no change; Splunk processes continue to show soft/hard limits equal to 4096.
Did you cycle Splunk after making the changes?
You can also put the ulimit settings directly in the init script at /etc/init.d/splunk.
Hello @ricotries,
do you run Splunk as root user?
find the splunk process id with
pgrep splunk
find effective limits with (replace NNN with a process ID):
cat /proc/NNN/limits
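If pgrep returns more than one PID, a short loop over them (a sketch built from the same two commands) shows the open-files line for every matching process:

for pid in $(pgrep splunk); do grep "Max open files" /proc/$pid/limits; done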
Hey @PavelP, yes, I currently run Splunk as root. Following your directions, all processes associated with Splunk currently show a limit of 4096, just like the MC is reporting.