We have a multisite indexer cluster: two sites, 4 indexers per site (Splunk 6.5.3).
A few months ago, following Splunk's recommendations, we raised ulimit -n to 16384 on all indexers
for both the root and splunk users.
Running "ulimit -n" now returns 16384 on all 8 servers for both users.
To make these changes persistent across reboots, our Unix SAs added the following to the bottom of /etc/security/limits.conf:
splunkuser soft nofile 16384
splunkuser hard nofile 16384
root soft nofile 16384
root hard nofile 16384
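Note that limits.conf is applied by pam_limits at login, so the reliable way to confirm the entries above took effect is to check from a fresh login session for each user (e.g. `su - splunkuser -c 'ulimit -n'`; the user name here is taken from the limits.conf entries above). A minimal check:

```shell
# Print the open-file limit seen by a fresh shell.
# Run this as each user from a new login session (su - <user>),
# since limits.conf only applies when PAM processes a login.
sh -c 'ulimit -n'
```

If this prints the old value despite the limits.conf entries, the session was likely started through a path that bypasses PAM (e.g. some init scripts).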
However, when we run a health check via the Splunk Monitoring Console, it reports ulimits.open_files as 4096 on all 4 servers on site 1,
while ulimits.open_files is 16384 on all servers on site 2.
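For what it's worth, the value the Monitoring Console reports reflects the limit of the running splunkd process, which can differ from what a new shell sees if splunkd was started before the limit change. On Linux, the effective limit of a running process can be read from /proc; a sketch, using the current shell's PID as a stand-in:

```shell
# Read the effective "Max open files" limit of a running process
# from /proc/<pid>/limits. $$ (the current shell) is used here as
# an illustration; for Splunk, substitute the splunkd PID, e.g.
#   pid=$(pgrep -o splunkd)
pid=$$
grep 'Max open files' "/proc/$pid/limits"
```

If the splunkd processes on site 1 show 4096 here, they probably just need a restart (from a session where the new limit is in effect) to pick up the change.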