Getting Data In

Why are events coming in over source:tcp-ssl not assigned a hostname?


Dear Splunkers

Recently we reconfigured our remote syslog clients to deliver their logs over source:tcp-ssl instead of source:tcp.

Since then, the events are no longer assigned the configured hostname.
Instead, the host field contains the source IP address of the originating client.

inputs.conf @ indexer:

$ /splunk/bin/splunk btool inputs list tcp-ssl:// --debug

/data/splunk/etc/apps/IA-xml/local/inputs.conf  [tcp-ssl://]
/data/splunk/etc/system/default/inputs.conf     _rcvbuf = 1572864
/data/splunk/etc/apps/IA-xml/local/inputs.conf  host = hostname-xy
/data/splunk/etc/apps/IA-xml/local/inputs.conf  index = xml-p
/data/splunk/etc/apps/IA-xml/local/inputs.conf  sourcetype = xml

The fields 'index' and 'sourcetype' are assigned correctly. Only the 'host' setting does not seem to take effect.
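One setting worth checking here (an assumption on my part, since it does not appear in the btool output above): TCP inputs have a `connection_host` setting, and when it is left at its default the host is derived from the sending connection (IP or reverse DNS), which would override a static `host` value exactly as described. A sketch of the stanza with it set explicitly:

```
# etc/apps/IA-xml/local/inputs.conf -- sketch, not confirmed against this deployment
[tcp-ssl://]
# Assumption: with connection_host at its default, Splunk derives the host
# from the sender's address; "none" tells it to use the host value below.
connection_host = none
host = hostname-xy
index = xml-p
sourcetype = xml
```

Running the same `btool inputs list tcp-ssl:// --debug` command afterwards should then show `connection_host = none` alongside the other settings.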

It would be quite ugly to override the host field at index time with transforms.

Any ideas or experiences with this issue?

Thanks a lot & best regards




I'd recommend not using Splunk to listen directly for syslog, but instead have a syslog server (syslog-ng or rsyslog) listen for syslog and write that to files. Splunk then picks up the files and reads them.

This has a LOT of advantages. It is considered best practice. It makes restarting Splunk not interrupt your syslog inputs for that minute or two. It makes troubleshooting easier by separating the two functions. It makes the various configurations involved simpler. It also increases throughput.
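A minimal sketch of that setup, assuming rsyslog as the syslog server; the port, file paths, and per-host directory layout are illustrative, and the TLS listener options are omitted for brevity:

```
# --- /etc/rsyslog.d/remote.conf (illustrative; TLS driver options omitted) ---
module(load="imtcp")
input(type="imtcp" port="6514")
# Write each client's events to its own file, named by the syslog hostname
template(name="PerHost" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")
*.* action(type="omfile" dynaFile="PerHost")

# --- Splunk inputs.conf: monitor the files rsyslog writes ---
[monitor:///var/log/remote/*/syslog.log]
# host_segment = 4 takes the 4th path segment (/var/log/remote/<host>/...)
# as the host field, so each event is tagged with the originating client
host_segment = 4
index = xml-p
sourcetype = xml
```

With this layout the host field comes from the directory name rather than from the network connection, which sidesteps the original problem entirely.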

And most importantly, I would be VERY surprised if you continued to have this problem after you convert to syslog-ng and have Splunk read those files.

For what it's worth, you can run the syslog server right on that same box.

See this excellent blog for more information.
