Getting Data In

UF config works on CentOS/RedHat, but not on Ubuntu server

grijhwani
Motivator

Looking for suggestions for the obvious that I might have overlooked: a UF config distributed by Deployment Server (and known to reach all endpoints) works on CentOS and RedHat, but on Ubuntu, despite the fact that the UF is definitely communicating with the target indexer (as evidenced by tcpdump), it only seems to send occasional keep-alives, and no logs.

The config includes a monitor for /var/log, so despite being different platforms, there should be some activity on all of them.
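
For reference, the monitor stanza in the deployed inputs.conf would look something like this (a sketch only; the actual app name and any index or sourcetype settings in the distributed config may differ):

# inputs.conf in the deployed app (illustrative)
[monitor:///var/log]
disabled = false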

I have been scratching my head over this for two weeks, now.

Later edit -

I omitted to mention that in the splunkd log I see (only once):

TailingProcessor - Could not send data to output queue (parsingQueue), retrying...
TcpOutputProc - Connected to idx=x.x.x.x:yyyy

The indexer address and port are correct.
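
For anyone hitting the same symptom, one way to see whether the forwarder's queues are actually backing up is to check metrics.log on the Ubuntu box itself (a sketch, assuming the UF is installed under /opt/splunkforwarder):

grep 'blocked=true' /opt/splunkforwarder/var/log/splunk/metrics.log | tail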

1 Solution

grijhwani
Motivator

The difference between the machines is that the Ubuntu ones have - in most cases - had the default SplunkUniversalForwarder app bundle deleted. It seems that the handful of newer ones (which have not) are working fine. So the obvious conclusion is that there is something critical missing from the tailored configurations which is present in the UF config installed by default.

Now to find out what, and determine how to fix it without being reliant on the default configuration.
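
One way to narrow down what is missing is to dump the effective configuration on a broken host and on a working one (where the default app is still present) and diff the results (a sketch, again assuming /opt/splunkforwarder as the install path):

/opt/splunkforwarder/bin/splunk btool inputs list --debug > /tmp/inputs.effective
/opt/splunkforwarder/bin/splunk btool outputs list --debug > /tmp/outputs.effective
/opt/splunkforwarder/bin/splunk btool limits list --debug > /tmp/limits.effective
# run the same commands on a working host, then diff the files;
# --debug shows which file each setting comes from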


martin_mueller
SplunkTrust

Are you seeing any data forwarded into _internal on your indexers?
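
For reference, a search along these lines shows whether the Ubuntu forwarders are shipping their own internal logs (the host value is a placeholder for one of the affected machines):

index=_internal host=<ubuntu-host> | stats count by source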

grijhwani
Motivator

Something has come to light. Some more machines I added yesterday ARE reporting in.

acharlieh
Influencer

Is the splunkd process for the UF running as the "root" user on all systems, or is it running as a limited "splunk" or other user on some or all? If it is running as a limited user, are there permission differences on /var/log and the files within? (This may manifest as "Permission denied" messages in the _internal index, or in $SPLUNK_HOME/var/log/splunk/splunkd.log.)
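
A quick way to check both points on one of the affected hosts (a sketch; the splunkd.log path assumes a default /opt/splunkforwarder install):

# which user the forwarder runs as
ps -eo user,comm | grep splunkd
# permissions on the monitored directory and files
ls -ld /var/log /var/log/*.log
# any access errors reported by the UF itself
grep -i 'permission denied' /opt/splunkforwarder/var/log/splunk/splunkd.log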

grijhwani
Motivator

This is actually one of my major bugbears about Splunk - that it assumes root privilege by default, rather than spawning a root-capable agent for reading local system files and dropping privileges for everything else.

Nope. Running as root across the board.
