Getting Data In

Why is the host always 127.0.0.1 when I send syslog data via UDP to the local universal forwarder?

Explorer

Hi all,

I've just stumbled across this issue. I have a Linux host running rsyslogd. When I forward my events to the local non-privileged Splunk universal forwarder via TCP, everything is fine. However, when I switch the stream to UDP, the host field in the events is always set to 127.0.0.1 in Splunk, even though I can see the proper hostname in the raw events.

The strange thing is that when I open udp:1514 directly on the indexer and forward the events to that data input, it works just fine again.

To make it short:
TCP -> u.forwarder: hostname set properly
UDP -> u.forwarder: hostname 127.0.0.1
TCP -> indexer: hostname set properly
UDP -> indexer: hostname set properly
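
For completeness, the only thing I change on the rsyslog side is the forwarding protocol. In rsyslog's legacy forwarding syntax a single @ means UDP and a double @@ means TCP; roughly (the port matches my Splunk input, address is my local forwarder):

```
# /etc/rsyslog.conf (sketch)
*.* @127.0.0.1:1514      # UDP to the local forwarder
#*.* @@127.0.0.1:1514    # TCP variant -- this one works
```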

Here's my inputs.conf on the universal forwarder:

[default]
host = webserver.labcorp.lan

[udp://1514]
sourcetype = syslog
disabled = false
index = unix

The host will always be ignored until I switch the stanza to TCP.

This is the inputs.conf on the indexer:

[udp://1514]
connection_host = dns
index = network
sourcetype = pfsense
disabled = 0

[splunktcp://9997]
connection_host = ip

Is there something I'm missing?

Thanks!

1 Solution

Ultra Champion

Which is correct, right? You did receive that message from localhost.

There is actually another thing in play here, which also explains that difference between UDP and TCP.

When receiving syslog over UDP, Splunk prepends an extra timestamp and host header to the original syslog message (based on where it received the event from). The syslog host extraction then takes the hostname from that header, resulting in "localhost". For TCP this doesn't happen.

Two ways to fix this:
- Disable the extra header: set no_appending_timestamp = true in the [udp://...] stanza in inputs.conf.
- Don't send from rsyslog over the network to Splunk at all. This is actually the best option: let rsyslog write to disk and configure Splunk to monitor those files. This is the best-practice way of handling syslog data. The files created by rsyslog serve as a cache/buffer in case of downtime on the Splunk side. It also enables smart processing such as writing to host-specific files, which lets Splunk pick up the hostname from the file/folder name instead of having to extract it from the event.
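
For the first option, the stanza from the original post would become something like this (a sketch; keep your own index/sourcetype values):

```
[udp://1514]
sourcetype = syslog
index = unix
disabled = false
# suppress the extra timestamp/host header Splunk prepends to UDP events
no_appending_timestamp = true
```

For the second option, a minimal sketch of the file-based approach (paths and the template name are illustrative, not from the original post). On the rsyslog side:

```
# rsyslog: write each sending host's events to its own directory
$template PerHostFile,"/var/log/remote/%HOSTNAME%/syslog.log"
*.* ?PerHostFile
```

And on the forwarder, monitor those files and derive the host from the path:

```
# inputs.conf: take the host from the fourth path segment
# (/var/log/remote/<host>/syslog.log -> var=1, log=2, remote=3, <host>=4)
[monitor:///var/log/remote/*/syslog.log]
sourcetype = syslog
index = unix
host_segment = 4
```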



Explorer

Thanks for the clarification, this works fine!


SplunkTrust

Hi dkrey,

one thing that is different here is that you use connection_host = dns on the indexer but not on the UF in inputs.conf. Could be a reverse DNS issue here ...

cheers, MuS


Explorer

Hi MuS,
thanks for the reply. Setting connection_host = dns made a difference, but unfortunately only in that 127.0.0.1 now gets resolved to localhost.

~Dirk
