
Open file descriptors issue with Ubuntu indexer

krusty
Contributor

Hi,
I just installed the Splunk on Splunk app on my indexer.
In the Warnings and Errors view I opened the Warnings dashboard.
There, the item "1024 is the maximum number of open file descriptors allowed per process by the operating system." is shown as a problem.

I followed the link in the dashboard and carried out the steps reported in the community.
[Link to community][1].

I added the following entries to /etc/security/limits.conf:

root                hard    nofile          8192
root                soft    nofile          8192
*                   hard    nofile          8192
*                   soft    nofile          8192

I also uncommented this line in the /etc/pam.d/su file:

session    required   pam_limits.so

After making the changes I rebooted the server.

If I log in as root, splunk, or another user and run ulimit -a, I get this output:

root@srvXXX:/etc/pam.d# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 20
file size               (blocks, -f) unlimited
pending signals                 (-i) 16382
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 8192
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

The command output tells me that everything is fine, but if I check the Splunk log file I still see the message that the open file descriptor limit is too low.
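
For reference, this is roughly how I check the log (assuming a default install under /opt/splunk; the path may differ on your system):

# show the most recent ulimit-related startup messages
grep -i ulimit /opt/splunk/var/log/splunk/splunkd.log | tail -n 5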

Could someone tell me what the problem is and what I did wrong?

Thanks.

[1]: http://splunk-base.splunk.com/answers/13313/how-to-tune-ulimit-on-my-server

0 Karma

yannK
Splunk Employee

Check in the Splunk logs the ulimit detected by Splunk at launch:
index=_internal source=*splunkd.log ulimit

0 Karma

krusty
Contributor

If I check with ps -ef | grep splunk I see that all services are running as user root. Maybe that is not recommended, but for us it is okay.

If I run ulimit -a as user root I see "open files" is set to 8192.

Edit:
I found something strange. If I restart the whole server, the Splunk service is started by the OS, and then the "open files" size is 1024 (the default setting). If I only restart the splunk process as user root, the "open files" size is 8192 as I configured. In both cases the Splunk service is running as user root. Any ideas?
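
As a quick sanity check (just a sketch on my side, assuming a Linux /proc filesystem and that the main process is named splunkd), I compared what the running process actually gets in both cases:

# limit effectively applied to the oldest running splunkd process
cat /proc/$(pgrep -o splunkd)/limits | grep "Max open files"

If the boot-time start really bypasses PAM (and so /etc/security/limits.conf), I guess the limit would have to be raised in the init script itself, e.g. by adding something like this near the top of /etc/init.d/splunk before the start command (just an assumption on my side):

ulimit -n 8192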

0 Karma

yannK
Splunk Employee

So there are 3 possible explanations:
- the ulimit settings are applied after Splunk is started.
- the ulimit settings are not applied as you expect.
- the ulimit settings are applied to the root user, but Splunk is not running as root and therefore gets different (per-user) limits.

The last one is likely to be the right one. Check the user running Splunk, and verify its limits with
su - <splunk user> -c 'ulimit -a'

Then refer to your system documentation to specify a ulimit -n (nofile) value for this user in the system configuration.
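
For example, if splunk were running as a dedicated "splunk" user (an assumption on my side; per-user entries only matter if Splunk is not running as root), the entries in /etc/security/limits.conf would look like this:

splunk              hard    nofile          8192
splunk              soft    nofile          8192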

0 Karma

krusty
Contributor

Hi,

I used your search string and found out that Splunk still has the open files limit set to 1024.
It seems to me that Splunk ignores the OS setting. Can I change this?

0 Karma