Splunk Enterprise

Linux UF not forwarding logs to Indexers

Xander13
Observer

Hi Guys

I have an issue with a newly set up HF and UF.

The Windows UFs' logs are reaching the indexers, but the Linux UF's are not.

Communication between the Linux UF and the HF looks OK as observed with tcpdump: the Linux UF is sending traffic, and the HF receives and processes it.
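(Checked roughly with something like the following on the UF - the interface name and receiving port 9997 here are only placeholders for our actual values:)

tcpdump -nn -i eth0 host <HF_IP> and port 9997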

Can you help with what needs to be checked on the UF or the HF?


PickleRick
SplunkTrust

OK. Unless you do something very, very strange, a Splunk component should be reading and indexing or forwarding its own internal logs. That's why I asked about the internal logs. Your output from list monitor shows just that, and it's a normal thing.

I asked how you checked whether you're getting the data or not because it's a fairly typical case that, when your source has misconfigured time settings (either the clock is not in sync or the timezone is set up wrongly), the data is actually indexed but at the wrong point in time. So when you search the "last 15 minutes" or the last few hours it doesn't show up, but the data is there - just badly onboarded. Try searching for those "not working" hosts over a bigger time range (you could risk all-time, especially if you do it with tstats):

| tstats min(_time) max(_time) count where index=_internal host=<your_forwarder>

I'm assuming your data flow is UF->HF->idx, right? Do the Windows UFs go through the same HFs as the Linux ones?
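While you're at it, it's worth eyeballing the UF's outputs.conf - a minimal forwarding setup pointing at an HF usually looks roughly like this (the group name and port 9997 are just placeholders/assumptions, compare against your actual file):

[tcpout]
defaultGroup = hf_out

[tcpout:hf_out]
server = <HF_IP>:9997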

Look for information about the connection being established to the downstream HF in the UF's splunkd.log (or for errors). If there are errors, look for corresponding errors/warnings on the HF's side.
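For example, something along these lines on the UF (the path assumes a default UF install under /opt/splunkforwarder; the exact message wording varies between versions, but outgoing-connection and blocked-queue messages usually come from the TcpOutputProc component):

grep -i TcpOutputProc /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20

and on the HF side, check for recent warnings/errors around the input/output components:

grep -iE "TcpInputProc|TcpOutputProc|WARN|ERROR" /opt/splunk/var/log/splunk/splunkd.log | tail -50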


Xander13
Observer

 

From the UF, ./splunk list monitor:

Xander13_0-1728233644849.png

tcpdump from the UF, checking traffic to the HF's IP:

Xander13_3-1728234624726.png

 

 

tcpdump from the HF, checking traffic from the UF's IP:

Xander13_2-1728234517278.png

Xander13_4-1728234759279.png


PickleRick
SplunkTrust

ENOTENOUGHINFO

But seriously. Firstly, what does your infrastructure look like? Secondly, do you get _any_ logs from any of your new hosts (including internal indexes)? Thirdly, how did you verify that the data is not ingested? Fourthly, did you do any more troubleshooting or just the tcpdump? Fifthly, what do you see in your tcpdump output? Sixthly, did you check splunkd.log on the involved hosts?


Xander13
Observer

1. Infra - the UFs (Windows and Red Hat 8.10) and the HF (Red Hat 9.4) are in Azure. Logs are forwarded to indexers (remote, on-prem).

2. Windows (UF) logs are received by the indexers.

3. Linux (UF) logs are not received by the indexers.

4. From the Linux UF, ./splunk list monitor lists all the log files to be forwarded.

    Established connections on the forwarding port are visible for both the UF and HF IP addresses when checking with netstat -an (see the example command after this list).

5. Continuous traffic is observed going out from the UF to the HF (SYN and ACK in tcpdump).

6. Yes. What exactly should I check in splunkd.log?
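(For reference, the connection check in item 4 was roughly along these lines - 9997 is shown here only as a placeholder for the actual receiving port from our outputs.conf:)

netstat -an | grep 9997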

 

What commands can I use to confirm that logs are forwarded from the UF to the HF, and then from the HF to the indexers?

 
