All Apps and Add-ons

Open file descriptors issue with Ubuntu indexer


I just installed the Splunk on Splunk app on my indexer.
In the Warnings and Errors view I opened the Warnings dashboard.
There, the item "1024 is the maximum number of open file descriptors allowed per process by the operating system." is flagged as a problem.

I followed the link in the dashboard and performed the steps described in the community post.

I added the following entries to /etc/security/limits.conf:

root                hard    nofile          8192
root                soft    nofile          8192
*                   hard    nofile          8192
*                   soft    nofile          8192

I also uncommented this line in /etc/pam.d/su:

session    required   pam_limits.so

After making the changes I rebooted the server.
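As a quick sanity check after logging back in, it can help to query the soft and hard limits separately; a minimal sketch (note that limits.conf is applied at session start, so the values only show up in a fresh login session):

```shell
# limits.conf sets both a soft and a hard value; "ulimit -n" alone
# reports only the soft limit.
ulimit -Sn   # soft limit (the one processes actually hit)
ulimit -Hn   # hard limit (ceiling the soft limit can be raised to)
```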

If I log in as root, splunk, or any other user and run ulimit -a, I get this output:

root@srvXXX:/etc/pam.d# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 8192
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

According to the command output everything is fine. But if I check the Splunk log file I still see the message that the open file descriptor limit is too low.

Could someone tell me what the problem is and what I did wrong?




Splunk Employee

Check the Splunk logs for the ulimit values detected by Splunk at launch:
index=_internal source=*splunkd.log ulimit



If I check with ps -ef | grep splunk I see that all services are running as user root. Maybe that is not recommended, but for us it is okay.

If I run ulimit -a as user root, I see "open files" is set to 8192.

I found something strange. If I restart the whole server, the Splunk service is started by the OS, and then the "open files" size is 1024 (the default setting). If I only restart the Splunk process as user root, the "open files" size is 8192 as I configured. In both cases the Splunk service is running as user root. Any ideas?
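One likely explanation for this behavior (an assumption based on how pam_limits works, not something confirmed in this thread): /etc/security/limits.conf is applied by the pam_limits module during an interactive login, but a service started by the init system at boot never passes through PAM, so it simply inherits its launcher's limit, often the kernel default of 1024. Because resource limits are per-process and inherited by children, a quick demonstration in a subshell:

```shell
# Resource limits are per-process and inherited by children, so a
# daemon keeps whatever limit its launcher (init) had at boot time.
# Demonstration: lower the soft limit in a subshell only.
child_limit=$( (ulimit -S -n 512; ulimit -n) )
echo "child limit:  $child_limit"        # lowered in the subshell
echo "parent limit: $(ulimit -n)"        # parent is unaffected
```

A common workaround is therefore to raise the limit in the same context that starts Splunk at boot, for example by adding a ulimit -n line to the init script (or the equivalent limit setting of your init system), rather than relying on limits.conf alone.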


Splunk Employee

So there are 3 possible explanations:
- the ulimits are applied after Splunk is started.
- the ulimit settings are not applied as you expect.
- the ulimit settings are applied to the root user, but Splunk is not running as root and therefore gets different (per-user) limits.

The last one is the most likely. Check which user is running Splunk, then verify that user's limits with
su - <splunk user> -c "ulimit -a"

and refer to your system documentation to specify a ulimit -n for this user in the system configuration.
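In addition to su, on Linux you can read the limits of an already-running process directly from /proc; a sketch (substitute splunkd's PID, e.g. from pgrep splunkd, for "self"):

```shell
# /proc/<pid>/limits shows the effective limits of a running process,
# regardless of which user or shell started it. "self" refers to the
# current shell; replace it with splunkd's PID to inspect Splunk.
grep "Max open files" /proc/self/limits
```

This is useful here because it shows what the daemon actually got at startup, independent of what a fresh login shell reports.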




I used your search string and found out that Splunk still has the open files limit set to 1024.
It seems to me that Splunk ignores the OS setting. Can I change this?
