Getting Data In

UF config works on CentOS/RedHat, but not on Ubuntu server

grijhwani
Motivator

I'm looking for suggestions as to the obvious thing I might have overlooked: a UF config distributed by the Deployment Server (and known to reach all endpoints) works on CentOS and Red Hat, but on Ubuntu, although the UF is definitely communicating with the target indexer (as evidenced by tcpdump), it only seems to send occasional keep-alives, and no logs.

The config includes a monitor for /var/log, so despite being different platforms, there should be some activity on all of them.
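To confirm that the deployed config has actually taken effect on an affected host, the forwarder can be queried directly with its own tooling; a minimal sketch, assuming a default install under /opt/splunkforwarder:

# Show the monitor stanzas the forwarder has loaded, and which file each setting comes from
/opt/splunkforwarder/bin/splunk btool inputs list monitor --debug

# Show the effective output (indexer) configuration
/opt/splunkforwarder/bin/splunk btool outputs list tcpout --debug

# Ask the forwarder which forward-servers it considers active vs. configured but inactive
/opt/splunkforwarder/bin/splunk list forward-server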

I have been scratching my head over this for two weeks, now.

Later edit -

I omitted to mention that in the splunkd log I see (only once):

TailingProcessor - Could not send data to output queue (parsingQueue), retrying...
TcpOutputProc - Connected to idx=x.x.x.x:yyyy

The indexer address and port are correct.

1 Solution

grijhwani
Motivator

The difference between the machines is that the Ubuntu ones have, in most cases, had the default SplunkUniversalForwarder app bundle deleted. The handful of newer ones (which have not) are working fine. So the obvious conclusion is that something critical which is present in the UF config installed by default is missing from the tailored configurations.

Now to find out what, and determine how to fix it without being reliant on the default configuration.
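One way to pin down exactly what is missing would be to dump the effective configuration on a working host (default app still present) and a non-working one, and diff the two; a rough sketch, with paths and host names as placeholders:

# Run on both a working and a non-working Ubuntu host:
/opt/splunkforwarder/bin/splunk btool outputs list --debug > /tmp/outputs.$(hostname)
/opt/splunkforwarder/bin/splunk btool inputs list --debug > /tmp/inputs.$(hostname)
/opt/splunkforwarder/bin/splunk btool limits list --debug > /tmp/limits.$(hostname)

# Collect the dumps in one place and compare, e.g.:
diff /tmp/outputs.working-host /tmp/outputs.broken-host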

martin_mueller
SplunkTrust

Are you seeing any data forwarded into _internal on your indexers?

grijhwani
Motivator

Something has come to light. Some more machines I added yesterday ARE reporting in.

acharlieh
Influencer

Is the splunkd process for the UF running as the "root" user on all systems, or is it running as a limited "splunk" or other user on some or all of them? If it is running as a limited user, are there permission differences on /var/log and the files within? (This may manifest as "Permission denied" messages in the _internal index, i.e. in $SPLUNK_HOME/var/log/splunk/splunkd.log.)
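Checking both points from the shell on each host is quick; a minimal sketch (the splunkd log path assumes a default install under /opt/splunkforwarder, and the monitored file name may differ per distribution):

# Which user is the forwarder's splunkd actually running as?
ps -o user=,pid=,cmd= -C splunkd

# Permission differences on /var/log and a representative file
ls -ld /var/log
ls -l /var/log/syslog

# Has the forwarder already logged read failures itself?
grep -i "permission denied" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -n 20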

grijhwani
Motivator

This is actually one of my major bugbears about Splunk: that it assumes root privilege by default, rather than spawning a root-capable agent for parsing local system files and privilege-dropping everything else.

Nope. Running as root across the board.
