MC Health Check reports a different ulimit than the current server

ricotries
Path Finder

I have a RHEL 6.10 Splunk server and currently have the following configuration in /etc/security/limits.d/ for open file descriptors:

root        soft    nofile    64512
root        hard    nofile    80896

But when I run the Monitoring Console Health Check, it reports that the current ulimit.open_files = 4096. I tried using * instead of root but that didn't change anything.
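
For reference, a fresh root login shell over SSH does pick up the values from /etc/security/limits.d/, so the file itself appears to be read:

# ulimit -Sn
64512
# ulimit -Hn
80896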

I have seen this issue brought up before, but only for RHEL 7.x, which uses systemd instead of SysV init.

1 Solution

ricotries
Path Finder

I was able to permanently solve this by altering /etc/init.d/splunk. Placing the following at the beginning of the splunk_start() and splunk_restart() functions applies the changes at boot, or whenever the init script is invoked through service (there is no point in setting them for a status or stop request):

ulimit -Sn 64514 2> /dev/null
ulimit -Hn 80896 2> /dev/null

When these limit settings are placed in /etc/security/limits.d/ or limits.conf, Splunk does not inherit them regardless of the user assigned to run Splunk (most likely because those files are applied by PAM at login, while a service started at boot by SysV init never goes through PAM). Changing Splunk's init file, however, does the trick.
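
For illustration, the top of splunk_start() ends up looking roughly like this (a sketch only; the body of the generated init script varies by Splunk version, so the two ulimit lines are the only additions):

splunk_start() {
  # Added: raise file-descriptor limits before splunkd is launched
  ulimit -Sn 64514 2> /dev/null
  ulimit -Hn 80896 2> /dev/null
  # ... original body of the generated splunk_start() continues unchanged ...
}

splunk_restart() gets the same two lines at the top.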

NOTE: This approach does not work if you restart Splunk with $SPLUNK_HOME/bin/splunk start|restart, since that starts or restarts the software without going through the init file. For this approach to work you need to use "service splunk start|restart".

PavelP
Motivator

Hello @ricotries,

Configure it directly in the /etc/security/limits.conf file:

root soft nofile 64514
root hard nofile 80896

and reboot the machine. After rebooting, you can verify the effective limits of a login shell:

# grep files /proc/$(pgrep -n bash)/limits
Max open files            64514                80896                files

Let me know how it went.

ricotries
Path Finder

Even before writing it directly to /etc/security/limits.conf, checking the Bash limits shows they are the ones set under /etc/security/limits.d/.

When the settings are applied directly in /etc/security/limits.conf there is no change either: the Splunk processes still show soft and hard limits equal to 4096.

codebuilder
Influencer

Did you cycle Splunk after making the changes?

You can also put the ulimit settings directly in the init script at /etc/init.d/splunk.
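
For example, the same two lines as in the accepted answer, placed near the top of splunk_start() and splunk_restart():

ulimit -Sn 64514 2> /dev/null
ulimit -Hn 80896 2> /dev/null

Then restart with service splunk restart so that the init script is actually executed.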

PavelP
Motivator

Hello @ricotries,

Do you run Splunk as the root user?

  1. Log in to the RHEL 6 host via SSH.
  2. Find the Splunk process IDs with:

    pgrep splunk

  3. Find the effective limits with (replace NNN with a process ID from step 2; a combined one-liner is shown after these steps):

    cat /proc/NNN/limits
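
Putting steps 2 and 3 together as a one-liner (a sketch; if several splunkd processes are running, this only inspects the oldest one):

    grep "Max open files" /proc/$(pgrep -o splunkd)/limits

If it still shows 4096 for both the soft and hard limit, splunkd never picked up the values from limits.conf.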

ricotries
Path Finder

Hey @PavelP, yes, I currently run Splunk as root. Following your directions, all processes associated with Splunk show a limit of 4096, matching what the MC reports.
