Hi @danielbb
I suspect the main reason for this is that 9514 is not a privileged port, i.e. Splunk can bind to it (ports above 1024) without additional permissions. To bind to a port below 1024, Splunk would need to run as root or be granted the CAP_NET_BIND_SERVICE capability.
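If you specifically wanted Splunk itself to bind to 514, one generic Linux approach (not Splunk-specific guidance, and the /opt/splunk path is just an assumed default install location) is to grant that capability to the splunkd binary, roughly like this:

    # allow splunkd to bind to privileged ports (<1024); needs re-applying after upgrades, since the binary is replaced
    sudo setcap 'cap_net_bind_service=+ep' /opt/splunk/bin/splunkd
    # confirm the capability was set
    getcap /opt/splunk/bin/splunkd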
It is common practice for Splunk to listen on ports higher than 1024 for syslog, and people often prefix 514 with another digit. Sometimes you will see multiples such as 7514, 8514, and 9514 used to receive traffic from different syslog sources.
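For reference, a minimal inputs.conf sketch of what listening on such a port could look like (the port, sourcetype, and use of both UDP and TCP here are illustrative assumptions, not a recommendation for your setup):

    [udp://9514]
    sourcetype = syslog
    connection_host = ip

    [tcp://9514]
    sourcetype = syslog
    connection_host = ip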
And, to add to the answers already provided: "syslog" is not a single, strictly defined protocol. It can mean many different things depending on context, and it is definitely not tied to port 514. It is a perfectly normal situation for "syslog" data to be sent to another port.
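As a quick illustration (assuming a Linux host with the util-linux logger utility, and splunk.example.com as a placeholder for wherever your listener actually runs), a test message can be sent to a non-default port like this:

    # send a one-off test message over UDP to port 9514; swap --udp for --tcp if the input is TCP
    logger --udp --server splunk.example.com --port 9514 "test syslog message on a non-default port"

Depending on your logger version the long options may not be available; -d, -n and -P are the short equivalents.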
Adding on to @livehybrid's response, sending TCP/UDP traffic directly to a Splunk instance is discouraged. The reason is that any time that instance restarts, data arriving during the outage is lost. Also, the network distance usually sitting between the data source and Splunk increases the chances of UDP data getting dropped.