How do you change the ulimits with Splunk 6.1 when starting from init.d?

Lowell
Super Champion

Can anyone explain the ulimit (max open files) behavior of Splunk 6.1? I've tried the traditional approach (adding entries to /etc/security/limits.conf for the splunk user), and it seems to have no effect if I run Splunk from the service script (/etc/init.d/splunk) as root: the max open files ulimit is always reported as 4096. If I log in as the "splunk" user manually and run ulimit -n, I see the value I set, 10240.

I know that in Splunk 6.1, splunk now handles switching from root to the designated non-privileged user internally. (The user is specified in SPLUNK_OS_USER in splunk-launch.conf, and no longer in /etc/init.d/splunk.) So I thought perhaps I'd need to increase the limits for root as well, and did that too, with no luck: Splunk still shows 4096 as the max open files limit on startup.
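
For reference, the entries I added look roughly like this (a sketch; the exact values, and whether they go in /etc/security/limits.conf or a limits.d snippet, will vary by distro):

# /etc/security/limits.conf (illustrative)
splunk  soft  nofile  10240
splunk  hard  nofile  10240
root    soft  nofile  10240
root    hard  nofile  10240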

If I switch to the splunk user interactively and run "splunk start" manually, the expected 10240 ulimit value is shown in splunkd.log.
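
(The value I'm checking is what splunkd reports at startup; something like this, assuming a default /opt/splunk install, pulls out the relevant lines:)

grep -i ulimit /opt/splunk/var/log/splunk/splunkd.log | tail -5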

Of course starting Splunk manually isn't a great long-term option, as the fix doesn't survive a simple reboot. (At boot, /etc/init.d/splunk start runs as root, not as the "splunk" user, and the ulimit is once again set to 4096.)

Any ideas?


Lowell
Super Champion

My solution has been to fall back to the old-school (Splunk 6.0.x and earlier) init.d-style script, basically swapping out /opt/splunk/bin/splunk start for su - splunk /opt/splunk/bin/splunk start. This ensures that Splunk itself isn't handling the user switching, and then everything works fine.
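
For reference, here's a rough sketch of the edited start/stop stanzas (against the stock generated init script; I'm using su's -c form, and the variable name, function layout, and exact flags may differ on your version):

# /etc/init.d/splunk (excerpt, sketch)
SPLUNK_USER=splunk   # the non-privileged account Splunk should run as

splunk_start() {
  echo Starting Splunk...
  # Let su (and therefore PAM) set up the session, so the pam_limits entries
  # in /etc/security/limits.conf apply, instead of letting splunkd drop
  # privileges itself:
  su - "$SPLUNK_USER" -c '/opt/splunk/bin/splunk start --no-prompt --answer-yes'
}

splunk_stop() {
  echo Stopping Splunk...
  su - "$SPLUNK_USER" -c '/opt/splunk/bin/splunk stop'
}

I suspect the reason this works is that su runs the PAM session stack, which applies the limits.conf values, whereas splunkd's internal setuid switch never touches PAM.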

BTW, we've also seen some other oddities: files created by splunk randomly end up with the "root" group, and group membership is sometimes not correct when splunk switches users on its own.

I haven't retested since the 6.1.2 days, so some of these issues may have been fixed by now.

nawazns5038
Builder

How can we change the ulimits of Splunk to the desired value?
I edited the /etc/security/limits.conf file, added "* - nofile 64000", and rebooted the instance.
But Splunk still shows only 4096. How can we change this value?


sloshburch
Splunk Employee

I suspect this could be behavior related to http://docs.splunk.com/Documentation/Splunk/latest/Admin/ConfigureSplunktostartatboottime

After a reboot, if you are running Splunk as a non-root user, the boot-start init script could still be starting it with the old settings. Check out that link; hopefully that's what you've run into here.
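
If the init script predates the splunk account (or was written by hand), regenerating it as root is roughly:

# run as root; paths assume a default /opt/splunk install
/opt/splunk/bin/splunk disable boot-start
/opt/splunk/bin/splunk enable boot-start -user splunk

Then reboot (or restart via the service script) and recheck the ulimit values splunkd logs at startup.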
