Security

How to get "splunk" user to read "root" user-owned files?

Builder

Our general policy is to not run applications on our servers as the "root" user. However, some log files get written with the read/write access to only the owner (not to any group, and not to others), and the owner is always "root".

When attempting to add a monitor stanza on this file in the inputs.conf for the forwarder, I get the following error:

Insufficient permissions to read file='/file/location/filename' (hint: Permission denied).

(The file in question is an SMTP maillog.)

Permissions on the file:

-rw------- root root

As I understand it, one of the perks/features of Splunk is that the non-root user (in this case, "splunk") should be able to be granted read permission on all files, even ones that are owned by root.

Changing the permissions on the file won't work because each night when the log rolls over, the permissions will get reset.

What's the way to get around this? If I remember correctly from things I've heard (and a few of the other Answers posts allude to this), there's some file on the host that can be modified to add the "splunk" user and grant read-only access to these root-owned files, but I don't know/understand the specifics of how to do that.

The forwarder is running on a Linux CentOS 5.11 system. The "splunk" user is a local user.

1 Solution

SplunkTrust

Giving sudo rights for the user 'splunk' to start splunk as 'root' doesn't help here. So let's skip that idea entirely.

Options are:

  1. Traditional Unix permissions model
  2. POSIX ACLs
  3. Something magical

In the traditional Unix permissions model, the kernel uses the user/group/other bits to decide if the process in question has the rights to read the file. You could set your /var/log directory (and subdirs) to be owned by group 'syslog' and make the 'syslog' group a supplemental group for the user 'splunk'. From there, you'll need to chmod g+s /var/log (and any other subdirs of /var/log) in order to force the files to be owned by the supplemental group. (A setgid on a directory makes the kernel create any new files in that directory with the group-ownership set to the group-ownership of the directory itself). You'll also need to change the umask for files written by the syslog server to something more like 007 instead of 022 or 077. That way, the files will have group-read. Between the umask enabling group-read, the setgid, and the supplemental group, Splunk should be able to read the files.
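A sketch of how those pieces fit together (the 'syslog' group name is an assumption, and it's demonstrated on a scratch directory so the mechanics can be tried safely before touching /var/log):

```shell
# Rough sketch of the group/setgid approach. In production (as root):
#   groupadd syslog && usermod -aG syslog splunk
#   chgrp -R syslog /var/log && chmod g+s /var/log
# Here a scratch directory and the current user's primary group stand in.
logdir=$(mktemp -d)
grp=$(id -gn)

chgrp "$grp" "$logdir"   # stand-in for: chgrp -R syslog /var/log
chmod 2750 "$logdir"     # the 2 is setgid: new files inherit the dir's group

# Simulate the syslog daemon creating a new log with a group-friendly umask.
(umask 027 && touch "$logdir/maillog")

stat -c '%G %A' "$logdir/maillog"   # group matches the directory; mode -rw-r-----
rm -rf "$logdir"
```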

If you use POSIX ACLs, you can define additional privs above/beyond the standard Unix permissions model. So you could give the user 'splunk' specific read access to these files. As mentioned elsewhere in other answers here, the setfacl command is the route here. The problem here is that few tools properly understand POSIX ACLs and you can run into trouble where newly created files (post-logrotate) may not get the right ACL applied every time. Another problem is that even though these have POSIX in their name, the userspace command set and semantics for configuring them is wonky from OS to OS. Linux, AIX, and Solaris all do it slightly differently. This works, but expect to put some energy into making it work.

As far as magical things, there's the concept of "capabilities". Both Linux and Solaris have the notion of "I can give this process environment something greater than a normal user, but less than full root." POSIX introduced the idea of "CAP_DAC_READ_SEARCH", which is basically "read-only root": the kernel skips the permission checks when your process reads or searches any file or directory. With this capability flag, Splunk can transparently read any and all files as if it were root, but writes are still checked as normal. With Solaris, this is available when launching Splunk via SMF. With Linux, there is setcap. But, as of Splunk 6.2.3, the setcap approach on Linux DOES NOT WORK. I won't go into details, because they are ugly. I'm hoping to get a support case filed soon and see if it can be made to work...


Path Finder

The previous post's approach will work on the AWS Splunk AMI and on RHEL/CentOS.

Path Finder

Here's a setfacl command that seems to work for /var/log (run it as root):

setfacl -Rm u:splunk:rX,d:u:splunk:rX /var/log

"-Rm" applies the change recursively to /var/log and all of its subdirectories, taking the ACL entries from the command line
"u:splunk:rX" grants user splunk read access, plus execute/search where the item is a directory (or already has execute set)
"d:u:splunk:rX" sets the same entries as the default ACL, so newly created files and directories inherit them
"/var/log" is the path the permissions are applied to

Path Finder

I agree with not running Splunk as root, and I would strongly look for a way to change the permissions on the files. As @emiller42 said, you cannot bypass Unix security. If you cannot change the way the log file is created, a workaround could be a cron job that changes the permissions on the log file; Splunk will keep trying to access the file until it succeeds. Running a universal forwarder as root is not nearly as scary as running Splunk Enterprise as root, but it's still a risk.

SplunkTrust

If you disable the REST port as @acharlieh suggests that risk goes down a lot more, as there is little to no remote attack surface. But, there are other things to worry about too. My favorite is whether or not you use deployment server, and is your SSL configuration between the UF and the Deployment Server sufficiently robust? The person who controls deployment server controls the user running Splunk on every deployment client. Are you SURE nobody can impersonate your DS on the network, and push an app that runs a new script as root?


Builder

Sorry for the late response, I lost track of this...

We ended up creating a special group to which the "splunk" user can belong and giving that group read permissions on the files in question.

SplunkTrust

Hi,

I think I managed to get the setcap CAP_DAC_READ_SEARCH approach working, but it's still too early to confirm.
In summary, this is what I did to prevent those "error while loading shared libraries" errors you get when trying to start Splunk after the setcap:

As root
------------
setcap CAP_DAC_READ_SEARCH+ep /opt/splunkforwarder/bin/splunkd
echo "/opt/splunkforwarder/lib" >> /etc/ld.so.conf.d/splunk.conf
ldconfig

Now I'm trying to get all the different Linux and Unix App scripts working with the minimum set of permissions. Will probably post it somewhere if I ever manage to do so.

Thanks,
J

SplunkTrust

I would be happy if this works, but I'm guessing it does not. I did something similar in testing, and while it solves one issue, a new one pops up. Splunk uses the access() system call to test for the ability to read files, and Linux interprets access() differently when capabilities are enabled .. and not in the way we need it to be interpreted. So I think you're gonna hit a wall here.

Good news is I do have a case open on these types of issues and Splunk is at least looking at it. If you have a support agreement, I would strongly suggest that you file a support case of your own explaining how supporting capabilities like CAP_DAC_READ_SEARCH on Linux would make you so happy as a customer.


Influencer

I think maillog is written by syslog, so the permissions on the file should be manageable through whatever syslog config there is for your system, or maybe even through a separate logrotate config... but this is a bit outside my direct experience.

Communicator

I think acharlieh is right. Assuming the log is being written by syslog, you should be able to have it write the file with the correct permissions.
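If the box happens to run rsyslog, for instance, a few legacy-format directives placed before the maillog rule will make it create files group-readable (the 'splunk' group name here is an assumption; create it and add the splunk user to it first):

```
# /etc/rsyslog.conf (legacy directive format) -- assumed 'splunk' group
$FileOwner root
$FileGroup splunk
$FileCreateMode 0640
$DirCreateMode 0750
```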

However, if this isn't possible, you might be able to use the 'setfacl' Unix command on the directory the file is in to add explicit permissions for your Splunk user.

Something like this:
setfacl -m u:splunkuser:r /var/log


Builder

My data center tech tells me this about using the setfacl command on the particular file in question...

"The file rotates away, and along goes the file acl with it. File access permissions reside associated with the file inode, and not the directory, which means there is little chance of the acl sticking, leaving the original file intact upon rollover."

He says the same problem would exist if we used the setfacl on the directory that the file resides in.

Will the log's rotation really break any acl we set this way? Seems like that would basically defeat the purpose of the command...

(I suppose I could always just try it myself and see what happens.)


Esteemed Legend

You can give user splunk sudo rights by adding this entry AT THE BOTTOM of /etc/sudoers (preferably via visudo):

splunk ALL = (root) NOPASSWD: /opt/splunk/bin/splunk

Then run Splunk like this:

sudo /opt/splunk/bin/splunk start

Motivator

Splunk cannot override Unix security. If it could, that would be a Very Bad Thing. For Splunk to read a log, it must have permission to do so.

Therefore, if your logs are root:root 600, then you have two options: Run Splunk as root, or change the permissions on the logs.

Running a Splunk forwarder as root is somewhat common because of these permissions quandaries. It's simply an agent running locally which does not need to accept any incoming connections. (Everything a forwarder does is outbound-initiated.) Basically, the exposure of running a forwarder as root is relatively minimal. (Compared to the Splunk server, which I would never recommend running as root.) Yes, if Splunk is compromised, the host is effectively rooted. But you have to be local to compromise Splunk, meaning you're likely rooted anyway.

There is also the fact that your application is writing logs as root:root 600 in the first place, which means your app runs as root. Just as much of a security risk as running Splunk as root, to be honest. That may not matter, but it's a precedent you can point to.

Influencer

One note on @emiller42's comment: actually, you would not necessarily have to be local to compromise a UF. Just like full-blown Splunk, it opens port 8089 for the REST API. Now, there are options around firewalls, turning this off, and other protections that could be taken, but by itself a UF running as root opens a service as root that, assuming the existence of an exploit, could be compromised and root the box.

Motivator

I forgot about the REST endpoint. (In our setup, we don't allow that incoming traffic, so it's mitigated) Thanks for the catch!

Builder

Won't that run Splunk as the "root" user, though, not as the "splunk" user?


Esteemed Legend

The other thing you can do is try to make sure that new files get g+r ("group read") permission in that directory (assuming user splunk is in the root group). The group ownership can be inherited by new files and folders created in your folder /path/to/parent by setting the setgid bit using chmod g+s, like this:

chmod g+s /file/location/

Now, all new files and folders created under /file/location/ will have the same group assigned as is set on /file/location.

POSIX file permissions are not inherited; they are given by the creating process, combined with its current umask value. You can use POSIX ACLs to control this; to set the default ACL on a directory:

setfacl -d -m u::rwX,g::rX,o::- /file/location/

This will apply setfacl to the /file/location/ directory, modifying (-m) the default (-d) ACLs – those that will be applied to newly created items. (Uppercase X grants execute only on directories, or on files that already have execute set.)

Also, if necessary, you can add a u:someuser:rwX or g:somegroup:rwX entry – preferably a group – to the ACLs.


Builder

Yes, running Splunk as the root user is a problem.

We can't easily change the file permissions because it would require altering our application (we have all sorts of custom stuff under the hood)...and it's a piece of code that hasn't even been looked at by anyone in at least 5 years, probably longer. The current rotation resets permissions back to what I've listed in the post.


Esteemed Legend

The last solution does not require you to modify the application; unless the application specifies/enforces the permissions (only one way to find out), this will work because it is modifying the OS, not the application. You have nothing to lose by trying.


Champion

If it's syslog, what daemon are you running? You should be able to modify the permissions of the output file via the syslog daemon, not your app.
