Splunk Enterprise

Why did Universal Forwarder 9.1.0 (Linux) change the owner?

auradk
Path Finder

I just started rolling out Universal Forwarder 9.1.0.1 on a few machines. To my horror, I noticed that Splunk again made a significant change in a minor release. The forwarder is now owned by the user "splunkfwd" instead of "splunk".

I can only see this change mentioned in https://docs.splunk.com/Documentation/Forwarder/9.1.0/Forwarder/Installanixuniversalforwarder#Instal...

There is no other mention of or warning about this.

Am I the only one who needs to change a significant amount of automation/installation scripts because of this change?

I know the tarball is one workaround, but really?


auradk
Path Finder

Hi Splunkers,

Just an update on the original post, if you are finding this thread.

So after some back and forth with support, a few things were fixed:

First, in 9.1.1 they fixed the issue where the owner was forcibly changed from "splunk" to "splunkfwd" during an upgrade.
But that version gave 1000+ warnings about the user splunk being absent.
Then, in 9.1.2 the warnings were fixed on a fresh install, but they came back when upgrading.
Everything should now be fixed in 9.2.0 and later.
On top of that, they have implemented that if the user "splunk" exists at installation time, "splunk" will be the owner instead of "splunkfwd".

So that said, your automation scripts need to ensure that the "splunk" user exists prior to installation, and then everything should be as it used to be (see the sketch below). So it is still a change for all the automation out there, but a small one, I believe.
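A minimal sketch of what that could look like in an install script, assuming the 9.2.0+ behavior described above (the RPM filename is a placeholder; UID/GID handling is up to you):

# Pre-create the "splunk" user and group so the installer adopts it as the owner
groupadd splunk
useradd -m -g splunk splunk
rpm -i splunkforwarder-<version>-<build>.x86_64.rpm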

Now tell me again why this stunt was necessary, since the "splunk" user will be present if Splunk Enterprise is already installed.... 🙂

EDIT: A few packages have been released since this post was created, so I want to correct some of my misunderstandings. I still believe this was a huge mistake, but the warnings are now gone, and what will happen during installation and upgrade is a bit clearer:

RPM Installation:
The forwarder will use splunkfwd as the owner, no matter what. You can chown the installation folder and change splunk-launch.conf to revert to splunk as the owner, but you have to do it in your script after the RPM installation.
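A rough sketch of that post-install step, assuming the default install path and that the installer wrote SPLUNK_OS_USER=splunkfwd into splunk-launch.conf:

# Revert ownership and the launch user back to "splunk" right after "rpm -i"
chown -R splunk:splunk /opt/splunkforwarder
sed -i 's/^SPLUNK_OS_USER=.*/SPLUNK_OS_USER=splunk/' /opt/splunkforwarder/etc/splunk-launch.conf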


RPM Update: 
The forwarder will retain splunk as the owner if the previous forwarder installation was owned by splunk.

sec1
Engager

I'm still seeing this behavior when upgrading to 9.2.1 (via RPM).

This system was running an older version of Splunk. The "splunk" user did exist, and logs were showing up in the indexers/web search.

The service is running via systemd.

Upon starting, it chowned everything to the wrong user (splunkfwd), so it couldn't access its config and exited. lol.

Please, Splunk, do not force user names or group names, and don't change them during an update! It is not the (Unix) way.

(Don't get me started on the main Splunk process being able to modify its own config and binaries and execute the altered binaries. That just isn't safe.)

I reverted to a snapshot. At least Splunk runs and logs again. Unfortunately, this is a compliance failure at modern companies.

Now tell me again why this stunt was necessary.


auradk
Path Finder

I posted an edit to clarify what I have found so far. Sorry for not doing this earlier.
Depending on how old your forwarder was before the upgrade, remember that a direct upgrade to forwarder 9+ is only supported from 8.1.x and higher.
That said, I don't think we have seen the end of this yet 😞


isoutamo
SplunkTrust
One comment: user creation happens only when installing with the package manager. If you are using the tar package, you must create the user and group yourself if you want to use them.
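A minimal sketch of a tarball install, assuming the default /opt path (the tarball filename is a placeholder):

# Create the user and group yourself, then extract and hand over ownership
groupadd splunk
useradd -m -g splunk splunk
tar -xzf splunkforwarder-<version>-<build>-Linux-x86_64.tgz -C /opt
chown -R splunk:splunk /opt/splunkforwarder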

trackerman
Engager

Still struggling to get our 9.1.3 forwarders working on RHEL 7 and RHEL 8 on DISA STIGed machines after the upgrade. Nothing I can find online, even here, has helped yet. We get 'tcp_conn-open-afux ossocket_connect failed with no such file or directory' messages, and SplunkForwarder.service just vanishes. Really? Tried yum erase and rm -R /opt/splunkforwarder and a fresh install, and still no go. It worked before the upgrade, as the splunk user. <aargh!> Going back to an older version for now, since the Cyber Team is really miffed.

Update 1: Well, I added the splunkfwd account to the root group and made progress, but not 100%. I will try root:root as an experiment; it does appear to be permission issues on the STIG-locked-down machine, even though splunkfwd:splunkfwd owns all /opt/splunkforwarder/ files and directories.

Update 2: Running as root has not fixed the issue. 'netstat -an | grep 9997' on the forwarder and indexer machines shows connections, and the 'Forwarder: Deployment' screen shows the non-working forwarders, but the 'Forwarder Management' screen does not show them. The 9.1.2 and 8.2.2.1 forwarders (yeah, old, but there are reasons) still forward fine to the 9.1.3 indexer. Hoping 9.2.0.1 fixes this, or I must roll back.

PickleRick
SplunkTrust

Well, permissive SELinux should not interfere with anything. So that's one less thing to worry about.


at_scale
Engager

What's the plan for updating the automation?


hajek
Engager

I'm also glad I found this thread; I only wish it had been sooner. I just ran into this today when I saw a forwarder on RHEL 7 update from splunkforwarder-9.0.2-17e00c557dc1.x86_64 to splunkforwarder-9.1.2-b6b9c8185839.x86_64.

In my environment, with our Ansible automation, we pre-create the splunk user and group so that we maintain the same UID and GID across all systems and don't conflict with users in our LDAP directory. It has always been a problem that both the forwarder and Enterprise used the same user name, since they expect different home directories (i.e., /opt/splunk vs. /opt/splunkforwarder), which means trying to centralize a splunk user doesn't work well.

I like the idea of having separate users/groups, but this was a surprise change to me, and I'm not sure what I am left with at the moment other than a currently broken install while I figure out what the implications of the 1400+ "warning: user splunkfwd does not exist - using root" and "warning: group splunkfwd does not exist - using root" messages are. Presumably I can just add the splunkfwd user and group, change ownership, and run some invocation of splunk enable boot-start -systemd-managed 1 -user splunkfwd -group splunkfwd to set up the systemd unit file (roughly as sketched below), but it's not something I had planned on doing. I'm also not sure whether I now need to change the user defined in /opt/splunkforwarder/etc/splunk-launch.conf.
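Something along these lines is what I'm imagining; a rough, untested sketch, and the -group flag is just what I'd assume based on the command above:

# Create the new account, hand over ownership, and regenerate the systemd unit file
groupadd splunkfwd
useradd -r -g splunkfwd splunkfwd
chown -R splunkfwd:splunkfwd /opt/splunkforwarder
/opt/splunkforwarder/bin/splunk disable boot-start
/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 1 -user splunkfwd -group splunkfwd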

isoutamo
SplunkTrust

One hint about home directories: I never set the home directory to /opt/splunk or similar for a service user unless the service/program requires it. It's much easier to manage the service when you can keep your own stuff outside of the distribution directories. It's also easy to keep temporary config backups etc. under the home directory.


chli_splunk
Splunk Employee

Good question. Since Forwarder 9.0, "least privilege mode" (running the Splunk service as NON-ROOT) is enabled by default, whereas Enterprise does not have such a feature (yet?). Previously, the Forwarder and Enterprise shared the same account, `splunk`, so since 9.0 the Forwarder creates a dedicated user, `splunkfwd`, to prevent user permission conflicts.

Today it's very popular to install the Forwarder and Enterprise on the same instance: install the Forwarder in the base image (so that all dockerized instances are monitored by default) to monitor internal platform metrics such as CPU, memory, network resources, system files, etc., and install Enterprise to ingest data from external sources or to host indexing/search.

So this is just a default account change, made as a security improvement, just like the default user changing from LocalSystem to Virtual Account on Windows since Forwarder 9.1.

 


auradk
Path Finder

I am sorry, but this sounds like a bad excuse for not thinking this through. I have never seen it stated that it is popular, recommended, or even supported to install the forwarder alongside the server. If you have any good links on this, please share them.
If the Docker people want this, then create a solution for them and leave the rest of us alone.

Imagine all the automation (Puppet, Ansible, self-coded, and so on) that now has to be changed.
Monitoring of the user and service needs to be changed.
There must be a ton of code/checks/monitoring that needs to be changed.

In regard to when this change was implemented, I did a quick install test (wiping each time):

rpm -i splunkforwarder-7.3.0-657388c7a488-linux-2.6-x86_64.rpm
- owner & group = splunk

rpm -i splunkforwarder-8.0.4-767223ac207f-linux-2.6-x86_64.rpm
- owner & group = splunk

rpm -i splunkforwarder-8.2.6-a6fe1ee8894b-linux-2.6-x86_64.rpm
- owner & group = splunk

rpm -i splunkforwarder-9.0.0-6818ac46f2ec-linux-2.6-x86_64.rpm
- owner & group = splunk

rpm -i splunkforwarder-9.0.5-e9494146ae5c.x86_64.rpm
- owner & group = splunk

rpm -i splunkforwarder-9.1.0.1-77f73c9edb85.x86_64.rpm
- owner & group = splunkfwd

And just to verify, I upgraded from 9.0.5 to 9.1.0.1, and yes, the owner changed from splunk to splunkfwd. So be careful out there.
To be fair, support said that this should be fixed in the upcoming 9.1.1, retaining the previous user.

Even the documentation uses "splunk" as the owner all the way from version 9.0 to 9.0.5:
https://docs.splunk.com/Documentation/Forwarder/9.0.5/Forwarder/Installanixuniversalforwarder

So I simply don't buy the excuse.

Now, if we are installing 9.1.0.1 and want to keep using "splunk" as the owner, we will have to do it manually: make the install, create the "splunk" user, update the unit file, chown SPLUNK_HOME to splunk, update SPLUNK_OS_USER=splunk in splunk-launch.conf, and then delete "splunkfwd", according to support.
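Roughly, that sequence would look like this (untested; default paths assumed, and using boot-start to rewrite the unit file is my assumption, not support's exact wording):

# Per support: install, create "splunk", re-own everything, fix the launch user,
# regenerate the systemd unit, and drop "splunkfwd"
rpm -i splunkforwarder-9.1.0.1-77f73c9edb85.x86_64.rpm
groupadd splunk
useradd -m -g splunk splunk
chown -R splunk:splunk /opt/splunkforwarder
sed -i 's/^SPLUNK_OS_USER=.*/SPLUNK_OS_USER=splunk/' /opt/splunkforwarder/etc/splunk-launch.conf
/opt/splunkforwarder/bin/splunk disable boot-start   # only needed if a unit already exists
/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 1 -user splunk -group splunk
userdel splunkfwd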

Just why. 

That said, good reason or bad, it does not change the fact that this was done out of the blue, with no prior warning. The same happened with the change from init.d to systemd and when the service name was changed.

Sorry for the rant; it just annoys me, because this should have been handled completely differently, imo.

mykol_j
Communicator

100% -- this is causing all kinds of FUBAR in our organization. I get a half-dozen sales-oriented announcements from Splunk, but they couldn't be bothered to warn us about something that breaks all the forwarders?

isoutamo
SplunkTrust

Now this has been fixed (read: removed) in 9.1.1.

2023-08-30 | SPL-242093, SPL-242240 | Should not create default "splunkfwd" account by Linux RPM/DEB installer during upgrade when Splunk has been managed by another account

auradk
Path Finder

I tested a fresh install of 9.1.1 using Splunk's documented installation procedure.

It gave me 1400+ warnings with:

warning: user splunk does not exist - using root
warning: group splunk does not exist - using root

If I created the splunk user and group first, there were no warnings. But now I have both the splunk and splunkfwd users!!
They should never have changed that.

It feels like there is no testing of the RPMs before shipping them. The installation might work, but how can this be the quality of the RPM?

I logged another support case for this. They have confirmed that this is an unresolved issue.


auradk
Path Finder

One small step for man...
I really hope that the fresh-install behavior will be reverted as well.

chadmedeiros
Path Finder

I am so glad I found this thread.

You are completely spot on with everything you said. It is infuriating for us as admins, and embarrassing for Splunk as a brand, that such major changes are implemented in minor version releases with little to no notice or documentation.

 

Absolutely ridiculous to change the default behavior of the installer in a minor release. Period.

 

jotne
Builder

Why did they change that?

I can see that in 9.0.4 it was user splunk.
https://docs.splunk.com/Documentation/Forwarder/9.0.4/Forwarder/Installanixuniversalforwarder#Instal...

9.0.4:

Create the Splunk user and group.
useradd -m splunk
groupadd splunk

9.1.0:

Create the Splunk user and group.
useradd -m splunkfwd
groupadd splunkfwd


If you have added the user splunk to various groups and the user changes after an upgrade, it may break stuff.
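For example, you could record the old memberships before the upgrade and, if needed, re-grant them to the new account afterwards (the group name here is just an illustration):

# Capture what "splunk" belongs to, then add "splunkfwd" to the equivalent group later
id splunk
usermod -aG adm splunkfwd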

isoutamo
SplunkTrust

Hi

Nice find! Somehow this is understandable, as there are situations where you are running both Splunk and the UF on the same node. I know this should be avoided, but I still see from time to time that it is a reasonable configuration.

Depending on your configuration, this is either just one parameter in a conf file or also some other changes in your automation. But in the long run I think this is a positive change, so we just need to update our automation.

r. Ismo


at_scale
Engager

Have you updated your automation yet to handle this? I can see the Puppet module hasn't been updated since 2022.


auradk
Path Finder

Not sure if you have seen it, but I posted an update. I'm unsure how to make it more visible.

Anyway, if you are using Puppet, just ensure that the splunk user is created prior to installation. Then it should work fine. But yes, it would be nice if the module were updated as well.
