Deployment Architecture

How do you change the ulimits with Splunk 6.1 when starting from init.d?

Lowell
Super Champion

Can anyone explain the ulimit (max open files) behavior of Splunk 6.1? I've tried the traditional approach (adding entries for the splunk user in /etc/security/limits.conf), and it seems to have no effect if I run splunk from the service script (/etc/init.d/splunk) as root. The max open files ulimit is always reported as 4096. If I log in as the "splunk" user manually and run ulimit -n, I see the value I set, 10240.
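For reference, the entries I added to /etc/security/limits.conf look roughly like this (10240 is just the value I chose; the columns are domain, type, item, value):

    splunk    soft    nofile    10240
    splunk    hard    nofile    10240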

I know that in Splunk 6.1, splunk now handles switching from root to the designated non-privileged user internally. (The user is specified in SPLUNK_OS_USER in splunk-launch.conf, and no longer in /etc/init.d/splunk.) So I thought perhaps I'd need to increase the limits for root, and I did that as well, with no luck. Splunk still shows 4096 as the max open files limit on startup.
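For context, the relevant line in $SPLUNK_HOME/etc/splunk-launch.conf looks something like this (assuming the non-privileged user is literally named "splunk"):

    SPLUNK_OS_USER=splunk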

If I switch to the splunk user interactively and run "splunk start" manually, then the expected 10240 ulimit value is shown in splunkd.log.
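For anyone checking the same thing, the value splunkd picked up at startup can be pulled out of the log like this (assuming a default /opt/splunk install):

    grep -i ulimit /opt/splunk/var/log/splunk/splunkd.log | tail -5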

Of course, starting Splunk manually isn't a great long-term option, as the setting doesn't survive a simple reboot. (At startup, /etc/init.d/splunk start will be run as the root user, not as the "splunk" user, and the ulimit will once again be set to 4096.)

Any ideas?

1 Solution

Lowell
Super Champion

My solution has been to fall back to the old-school (pre-6.1) init.d-style script: basically swapping out /opt/splunk/bin/splunk start for su - splunk /opt/splunk/bin/splunk start. This ensures that Splunk isn't the one handling the user switching, and then everything works fine.
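For anyone wanting to replicate this, the change in /etc/init.d/splunk is roughly the following sketch; your generated script may look a bit different, and I'm showing the su -c form here:

    # Before: splunkd is launched as root and drops privileges itself,
    # so it keeps root's 4096 open-files limit:
    #   "/opt/splunk/bin/splunk" start
    # After: start via a login shell for the splunk user, so that user's
    # PAM limits (from /etc/security/limits.conf) are applied first:
    su - splunk -c '"/opt/splunk/bin/splunk" start'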

BTW, we've seen some other weird anomalies, with files created by splunk randomly having the "root" group. I've also seen issues where group membership is somehow not correct when splunk switches users on its own.

I haven't tried this since the 6.1.2 days, so some of these things may have been fixed by now.


nawazns5038
Builder

How can we change the ulimits of Splunk to the desired value? I have edited the /etc/security/limits.conf file and rebooted the instance. I added "* - nofile 64000" to the file, but Splunk still shows only 4096. How can we change this value?
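For reference, this is the line exactly as it sits in /etc/security/limits.conf (the columns are domain, type, item, value; "-" sets both soft and hard limits):

    *    -    nofile    64000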


sloshburch
Ultra Champion

I suspect this could be behavior related to http://docs.splunk.com/Documentation/Splunk/latest/Admin/ConfigureSplunktostartatboottime

After a reboot, if you are running Splunk as a non-root user, it could still be loading the old settings. Check out that link; hopefully that's what you've run into here.
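If the existing init script predates the user change, regenerating it for the splunk user (as that doc describes) would look roughly like this, assuming a default /opt/splunk install:

    # Remove the old boot-start script, then regenerate it for the splunk user
    /opt/splunk/bin/splunk disable boot-start
    /opt/splunk/bin/splunk enable boot-start -user splunk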
