Deployment Architecture

Why can't I increase the number of open files?

mufthmu
Path Finder

Hi,

I'm running Red Hat 7.3, and the Splunk version is 7.3. The following edits were made to the /etc/security/limits.conf file inside the Splunk container:

root    hard   nofile   202400
root    soft   nofile   102400
splunk  hard   nofile   202400
splunk  soft   nofile   102400

The /etc/pam.d/su file was also edited to add:

session  required  pam_limits.so

But when I check the splunkd logs, I still see the default value of 65536.

I have read a few other similar questions too, but still no luck. What did I miss here?

Thanks in advance


ephemeric
Contributor

Why do you need to run Splunk in a container? It complicates everything.

First, test your ulimit settings in a VM to see how and why things work the way they do.

Remove as much as you can. Simplify.

If you must use Docker, use the `--ulimit` option:

$> grep -i limit /usr/lib/systemd/system/docker.service
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity

$> ulimit -n
1024

$> docker run -it --ulimit nofile=64000:64000 alpine
/ # ulimit -n
64000
/ #

 


codebuilder
Influencer

Setting parameters in /etc/security/limits.conf is deprecated. It still (sort of) works, but it is overridden by any configs set under /etc/security/limits.d/, which are evaluated in numerical/alphabetical order, meaning a file named 99-mylimits.conf overrides 01-mylimits.conf.
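For example, a drop-in file under /etc/security/limits.d/ (the file name below is hypothetical) whose entries would win over matching ones in limits.conf or in lower-numbered drop-ins:

```
# /etc/security/limits.d/99-splunk.conf  (hypothetical file name)
# Evaluated after limits.conf and after lower-numbered drop-ins,
# so these entries override any matching ones there.
splunk  soft  nofile  102400
splunk  hard  nofile  202400
```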

Also, any limits.conf parameter set within a container will not affect the host OS.


ephemeric
Contributor

If you have made many changes and want to restore SELinux contexts and can't remember all the dirs, you can:

touch /.autorelabel
reboot

rabbidroid
Path Finder

Services started with the legacy init system do not honor limits.conf, because they come up at the very beginning of system startup, before the limits and PAM configuration are read.

As others mentioned, use systemd to start the service; then you can set the limits in the unit file.
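A minimal sketch of such a unit-file addition, assuming the service is named splunkd.service (written as a systemd drop-in so the generated unit itself stays untouched; the drop-in path is an assumption):

```
# /etc/systemd/system/splunkd.service.d/limits.conf  (hypothetical drop-in path)
[Service]
LimitNOFILE=102400
LimitNPROC=16000
```

After creating it, run systemctl daemon-reload and restart the service so the new limits take effect.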

mufthmu
Path Finder

@rabbidroid systemd is used to start the splunk container in my AWS ec2.

Could you please elaborate on what you meant by "you can add it in the unit file"?

What unit file are we talking about here?


mufthmu
Path Finder

In my /etc/systemd/system/splunkd.service, I have added this stanza:

[Service]
LimitNOFILE=88888
LimitNPROC=16000
LimitDATA=8589934592

and my /etc/security/limits.conf inside the container is below (splunkd is run by root):

root soft nofile 99997
root hard nofile 99998
root soft nproc unlimited
root hard nproc unlimited

Then I ran systemctl restart splunkdocker , and the container restarted.

Next I ran docker exec to get into the container, and inside it ran:

cat /proc/<splunkd_pid>/limits

and got these results:

Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        unlimited            unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             unlimited            unlimited            processes
Max open files            65536                65536                files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       124464               124464               signals
Max msgqueue size         819200               819200               bytes

As you can see, "Max open files" did not change; it stayed at the default value.

However, it works whenever I set nofile to less than 65536. I checked that my system can handle over 3 million open files, so 65k can't be the cap. I'm so confused at this point.
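One thing worth checking (a guess, since the exact setup isn't shown here): a container inherits its default ulimits from the Docker daemon, not from the systemd unit that launches `docker run`, so comparing the two can show where the 65536 comes from. The container name below is an assumption:

```
$> grep 'open files' /proc/$(pidof dockerd)/limits    # dockerd's own limit: the container default
$> docker exec splunk sh -c 'ulimit -n'               # effective limit inside the container
$> docker run --ulimit nofile=102400:202400 ...       # per-container override at run time
```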


isoutamo
SplunkTrust
SplunkTrust

I'm not so familiar with Docker, but if I understood correctly, you changed the splunkd config on the host, not inside the container? Or does Docker take those files into use directly from the host's own config?


mufthmu
Path Finder

@isoutamo correct, I made the splunkd config changes at the host level and not inside the Splunk container, because the file /etc/systemd/system/splunkd.service only exists at the host level and not in the container.

The only config change I made inside the container is to the file /etc/security/limits.conf, below:

root soft nofile 99999
root hard nofile 99999
root soft nproc unlimited
root hard nproc unlimited

* soft nofile 99995
* hard nofile 99995
* soft nproc unlimited
* hard nproc unlimited

 


isoutamo
SplunkTrust
SplunkTrust

Hi

The unit file is created when you add Splunk boot-start for systemd. Of course, you can also create or modify it by hand. More information can be found at https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/RunSplunkassystemdservice

One note: when you add memory to your EC2 instance, you must remove and re-run the Splunk boot-start procedure; otherwise the new memory size is not reflected in the unit file.
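As a sketch, re-creating the unit file after an instance change might look like this (paths and flags assume a default /opt/splunk install; check the docs page above for your version):

```
$> /opt/splunk/bin/splunk disable boot-start
$> /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk
$> systemctl daemon-reload
$> systemctl start Splunkd
```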
r. Ismo


thambisetty
SplunkTrust
SplunkTrust

Open /etc/init.d/splunk using any text editor and place the ulimit commands between the `. /etc/init.d/functions` line and the `splunk_start() {` line.

The file should then look like this:

. /etc/init.d/functions

ulimit -Sn 64000
ulimit -Hn 64000
ulimit -Su 20480
ulimit -Hu 20480
ulimit -Hf unlimited
ulimit -Sf unlimited

splunk_start() {

Then restart the server for these changes to take effect.

 


isoutamo
SplunkTrust
SplunkTrust

Hi

Currently the better (best) way to handle boot-start on Linux is systemd. There are instructions for taking it into use:

https://docs.splunk.com/Documentation/Splunk/8.0.5/Admin/RunSplunkassystemdservice

There you can define those limits in the service file. Another benefit you get with systemd is the Splunk workload manager setup: systemd handles the needed configuration, whereas if you are using the old init scripts you must change/manage those yourself separately.

r. Ismo


isoutamo
SplunkTrust
SplunkTrust

Hi

If you are using SELinux, then after editing files you must run restorecon -Rv FILE/directory

r. Ismo
