The Splunk indexer and forwarders in my environment are configured to run as the "splunk" user for security reasons. Of course, this means that Splunk can no longer read root-owned log files. The first two thoughts to cross my mind were to either use filesystem ACLs to give the splunk user read access or to use a dedicated group.
I'm curious what people in this type of environment are doing to get around this. Have you run into any specific issues one way or the other?
Combining the solutions provided above, here is how I implemented it:
# touch /var/log/splunklog
# setfacl -Rm g:splunk:rX,d:g:splunk:rX /var/log
# cat /etc/logrotate.d/splunk_acl
/var/log/splunklog
{
    postrotate
        /usr/bin/setfacl -Rm g:splunk:rX /var/log
        touch /var/log/splunklog
    endscript
}
The above solution gives the splunk group recursive r-x access to the directories under /var/log and read-only access to the files (the capital X in rX applies the execute bit only where it already makes sense, i.e. directories and already-executable files).
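To see the effect of the capital X, you can compare the resulting entries on the directory and on a plain file. A sketch; exact output varies with your distribution and existing permissions:
$ getfacl -c /var/log | grep splunk
group:splunk:r-x
default:group:splunk:r-x
$ getfacl -c /var/log/messages | grep splunk
group:splunk:r--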
The following is a variation on the answer by @quixand, with detailed steps (I've tested them on CentOS 6.9):
Create the file
vi /etc/logrotate.d/Splunk_ACLs
and populate it with the logs you'd like to forward to splunk:
/var/log/splunklog
{
    postrotate
        /usr/bin/setfacl -m g:monitor:rx /var/log/cron
        /usr/bin/setfacl -m g:monitor:rx /var/log/maillog
        /usr/bin/setfacl -m g:monitor:rx /var/log/messages
        /usr/bin/setfacl -m g:monitor:rx /var/log/secure
        /usr/bin/setfacl -m g:monitor:rx /var/log/spooler
        /usr/bin/setfacl -m g:monitor:rx /var/log/php-fpm/error.log
        touch /var/log/splunklog
    endscript
}
Create a dummy log file to keep logrotate happy
sudo touch /var/log/splunklog
Test that the configuration is valid. You may want to create a copy of your configuration file first.
sudo logrotate -vf /etc/logrotate.d/Splunk_ACLs
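If you'd rather not force a real rotation, logrotate also has a debug mode that only parses the config and prints what it would do without touching any files:
sudo logrotate -d /etc/logrotate.d/Splunk_ACLs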
I like the idea of using this dummy file to get up and running. One additional hint: it's better not to set the x-bit, as log files are not executed (at least, they should not be executed 😉).
The above by quixand worked for me as well on Red Hat/CentOS. We also added the -d and -R flags to the setfacl command to set the defaults for the directory and make the change recursive.
sudo setfacl -Rdm g:splunk:rx /var/log/
The results of getfacl then include:
default:group:splunk:r-x
Keep in mind that this way you allow splunk to read each and every log file after rotation. Also, every logfile gets the x-bit set, which is at least unnecessary and could be a security risk. Alternatively, you could do the following (which still gives all rotated logfiles read permission for splunk):
setfacl -m g:splunk:rx /var/log
setfacl -dm g:splunk:r /var/log
This way splunk can still enter the /var/log directory and can read, but not execute, all newly created (i.e. all rotated) files.
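You can verify that the default entry now grants read without execute for newly created files. A sketch, with output abbreviated:
$ getfacl -c /var/log | grep splunk
group:splunk:r-x
default:group:splunk:r--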
On Red Hat we run splunk under its own user/group, then add read-only ACL permissions for the splunk group to specific files.
You can manually set the ACL with
sudo setfacl -m g:splunk:rx /var/log/messages
This will not persist, as logrotate will not re-apply the ACL setting, so for a more permanent solution we added a rule to logrotate to reset the ACL.
We added the file /etc/logrotate.d/Splunk_ACLs with:
{
    postrotate
        /usr/bin/setfacl -m g:splunk:rx /var/log/cron
        /usr/bin/setfacl -m g:splunk:rx /var/log/maillog
        /usr/bin/setfacl -m g:splunk:rx /var/log/messages
        /usr/bin/setfacl -m g:splunk:rx /var/log/secure
        /usr/bin/setfacl -m g:splunk:rx /var/log/spooler
    endscript
}
Check the ACL status of a file with
$ getfacl /var/log/messages
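Once the ACL is in place, the output looks something like this (a sketch; the base permissions and the mask will depend on your system):
# file: var/log/messages
# owner: root
# group: root
user::rw-
group::r--
group:splunk:r-x
mask::r-x
other::---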
As noted for the other answers on setfacl: do not set the x-bit on the files. They are not (or should not be) executed.
I just get the error "lines must begin with a keyword or a filename (possibly in double quotes)" with this kind of configuration. (That appears to be logrotate complaining that the stanza has no log file path before the opening brace; the variation above adds the dummy /var/log/splunklog line for exactly that reason.)
For more info on ACLs, take a look at https://help.ubuntu.com/community/FilePermissionsACLs
Also posted to Server Fault: http://serverfault.com/questions/258827/what-is-the-most-secure-way-to-allow-a-user-read-access-to-a...
Good question.
I'm running on the "inferior" Linux platform... 😉
I've generally found the easiest approach is to add splunk to my local admin group, the only group of users allowed to see the log files (besides root, or whoever actually owns the log file). This may not be the "best" approach, but it gives splunk read-only access to the logs without making the log files world-readable.
We mostly use syslog-ng internally. We have syslog-ng write everything to local files, and then use splunk forwarders to read the log files and forward the events to our central indexer. (If all the logs went through syslog, we might consider just doing the syslog forwarding thing, but there always seems to be one or two apps that don't use syslog. We therefore need a splunk forwarder anyway, so it's just easier to have the splunk forwarder pick up all the log files.)
We stick the following file permission settings in our syslog-ng.conf file:
options {
    ...
    owner(root);
    group(adm);
    perm(0640);
    ...
};
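For the 0640/group(adm) scheme to work, the splunk user of course has to be a member of the adm group. On most Linux systems that is a one-liner (this assumes the splunk user already exists, and the group name is whatever you chose in syslog-ng.conf):
sudo usermod -a -G adm splunk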
Then for any logs that rely on logrotate for creation, I use settings like this:
/var/log/log_file_name {
    ...
    create 0640 root adm
    delaycompress
    olddir old
    dateext
    ...
}
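For illustration, a fully filled-in stanza for a hypothetical application log might look like the sketch below. The log name, rotation frequency, and rotate count are placeholders, not from my actual config, and olddir requires that the old/ directory already exists under /var/log:
/var/log/myapp.log {
    weekly
    rotate 8
    compress
    delaycompress
    create 0640 root adm
    olddir old
    dateext
}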
I recommend the delaycompress setting, just in case splunk is still reading events from the file at the point it is being rotated (which isn't super likely, and may be handled automatically in any case, but it makes me feel better). The delay option also lets you index not only the primary log file but the previously rotated one as well. (Splunk will detect that it was rotated and will continue reading it from where it left off, just in case any events remained that weren't indexed yet. This is not usually a problem, but if splunkd was restarted just a few seconds before the log rotation kicked off, you could end up with a few missing events on some busy log files.)
I also like sticking the old logs in a separate directory using olddir, which lets me keep more history without cluttering up the main log directory (although since we now use splunk to store them for much longer, we may reconsider this and simply delete them more frequently. I don't know, but it's something to think about. At least until you get comfortable with splunk, you may want to keep your existing retention policy, just in case...) Either way, you need to make sure splunk knows not to try to read the compressed versions of your rotated log files. Normally splunk will detect rotated log files and not re-process them, but if you rotate and then compress, splunk doesn't recognize that your logfile.20100629.gz file is the same file as logfile from two days ago, and therefore your events are loaded twice (once from the original log file, and then again from the compressed file). There are a number of ways to avoid this problem, but it could easily be overlooked.
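One common way is to exclude compressed files in the forwarder's inputs.conf monitor stanza. A sketch; the monitored path is illustrative:
[monitor:///var/log]
blacklist = \.(gz|bz2)$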
I like the dateext option too. This is especially helpful if you have to go back and reload log files from a previous year. (Splunk will pick up the timestamp from the filename and therefore get the year correct. Since the default syslog timestamp format doesn't include the year, splunk has to guess, and can often get it wrong, unfortunately.) This also makes it easier on any incremental backup tools, since you don't keep renumbering the same file over and over. But generally speaking, splunk will handle either well automatically.
I do have an older linux system with an older version of syslog-ng that doesn't support setting the file mode and group, so I had to make a simple cron job that goes around and runs a bunch of chmod and chgrp commands to reset the permissions after a log rotation. I feel like this is sloppy, but it's the best solution I could come up with on that system.
And of course, the risk of running splunk as root is that it can easily index files you don't want it to (for example /etc/passwd or /etc/shadow, which is a known issue, by the way...). Not to mention any kind of security vulnerability that could let a remote user take over splunk and easily gain root access.
If you do need splunk to run saved searches that execute scripts, you can always set up sudo to allow it to run very specific commands as root without requiring a password. In my opinion, this is a much safer approach than running splunk as root and hoping that it doesn't execute something it shouldn't. And setting up sudo this way isn't all that difficult either.
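For example, a sudoers entry along these lines limits splunk to one specific command (the script path is hypothetical; edit with visudo):
# /etc/sudoers.d/splunk -- hypothetical example
splunk ALL=(root) NOPASSWD: /opt/scripts/restart_app.sh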
Also in terms of security, make sure you change the splunk admin password, even on your forwarders where splunkweb isn't running. You can remote-control your splunkd process over port 8089. (Simply open a browser to https://your.splunk.forwarder:8089/services to see what I mean.) If you don't want to do any remote splunkd administration, you may also want to block that port with a local firewall, but changing that password should be done in either case.
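On a forwarder without splunkweb you can change the password from the CLI; something along these lines, assuming the old default admin:changeme credentials of that era:
$SPLUNK_HOME/bin/splunk edit user admin -password 'N3wStr0ngPassw0rd' -auth admin:changeme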
For (current in 2018) reference, regarding port 8089 on Universal Forwarders: in server.conf use:
[httpServer]
disableDefaultPort = true
We set this in a special deployment app for all universal forwarders.
If you're using Solaris 10, the fine-grained privileges model makes this really easy. In your SMF manifest, do this:
<method_context project=':default' resource_pool=':default' working_directory=':default'>
  <method_credential group='splunk' limit_privileges=':default' privileges='basic,file_dac_read,net_privaddr' supp_groups=':default' user='splunk'/>
  <method_environment>
    <envvar name='HOME' value='/opt/splunk'/>
  </method_environment>
</method_context>
Giving the service the net_privaddr privilege allows it to bind to low ports without ever having been root, and file_dac_read allows it to read other users' files that it would not otherwise be able to read.
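For completeness, after editing the manifest you'd re-import it and restart the service with the usual SMF tools. The manifest path and service name below are assumptions based on a typical site setup:
svccfg import /var/svc/manifest/site/splunk.xml
svcadm restart splunk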
If you're on an inferior operating system like linux, there will be significantly more kludging, I would think 😕
I have noticed that when I install the *nix app, I am still receiving audit.log and other entries from files that are usually owned by root. How is that being accomplished? We would like to run as splunk on indexers and search heads, but the concern is that we will no longer be able to obtain logs owned by root. What is the solution here?