There are several posts on this already (most quite old), but I'm curious how people handle multiple UDP inputs on a single UDP/514 port. I was hoping the following inputs.conf would do this for me:
[udp://18.104.22.168:514]
connection_host = dns
index = index1
sourcetype = syslog

[udp://22.214.171.124:514]
connection_host = dns
index = index2
sourcetype = syslog

[udp://126.96.36.199:514]
connection_host = dns
index = index3
sourcetype = syslog
However, the UDP messages are never indexed by Splunk, even though I've verified that the packets are being received by the server. ONLY the first entry (18.104.22.168 in the example above) is properly indexed.
Thoughts? I'm hoping to avoid props.conf and transforms.conf if possible. Unfortunately, quite a bit of software will ONLY send syslog data on port 514.
I have the same problem.
You need to create a dedicated syslog server to capture your UDP traffic and write that data to disk. You will then use a forwarder to send that data to Splunk.
As mentioned, don't use Splunk's UDP input; use a syslog server.
But, in regards to your question, here is what is happening.
You have three inputs defined on the same port. Splunk won't error out on this; it will accept the configuration but only apply the first stanza it reads. So, as you mention, only the 18.104.22.168 configuration is working.
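As an illustration of why one port can't simply be split into several listeners: at the OS level, a given (address, port) pair can normally be bound by only one socket at a time, so a single process gets a single listener per port regardless of how many senders there are. This is a generic Python sketch of that behavior (port 15514 is an arbitrary unprivileged test port), not Splunk-specific code.

```python
import socket

# Bind one UDP listener on an unprivileged test port (15514 is arbitrary).
s1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s1.bind(("127.0.0.1", 15514))

# A second bind to the same (address, port) pair fails with EADDRINUSE
# on a default socket (no SO_REUSEPORT):
s2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    s2.bind(("127.0.0.1", 15514))
    second_bind_ok = True
except OSError:
    second_bind_ok = False
finally:
    s2.close()
    s1.close()

print("second bind succeeded:", second_bind_ok)  # False on a default socket
```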
So if you cannot use a dedicated syslog server, you would need to adjust the incoming ports and redirect your sending hosts accordingly:
[udp://18.104.22.168:514]
connection_host = dns
index = index1
sourcetype = syslog

[udp://22.214.171.124:515]
connection_host = dns
index = index2
sourcetype = syslog

[udp://126.96.36.199:516]
connection_host = dns
index = index3
sourcetype = syslog
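On the sending side, redirecting a host to a non-default port looks something like this if the sender runs rsyslog (a sketch; `splunk-hf.example.com` is a placeholder for your Splunk host, and as noted above, some software can't change the port at all):

```
# /etc/rsyslog.d/50-splunk.conf on a sending host (sketch)
# A single @ means UDP; ":515" overrides the default port 514.
*.* @splunk-hf.example.com:515
```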
The other option would be three different heavy forwarders on different IPs, each listening on UDP/514... but that's a headache to manage.
If that's not possible, take a look at the answer below.
It's very unfortunate that software as flexible as Splunk requires third-party software to solve a problem that (at first glance) appears to be a key reason people buy Splunk in the first place.
Are there any recommendations for a bare-bones syslog server that will accomplish this in a Windows environment? I'd rather not stand up a syslog server whose analysis tools compete directly with Splunk!
This isn't really a Splunk issue. It's a high-availability (HA) concern inherent to UDP.
Think of it this way: you need to upgrade, patch, or install an app in Splunk that requires a restart. You restart, and it takes 45 seconds for Splunk to come back up.
That's 45 seconds of missed UDP syslog messages, because UDP senders transmit without waiting for an acknowledgement.
Next scenario: you need to patch your Windows box. You have to stop services, patch, and reboot. There go 3 hours during which no UDP syslog messages come in because Splunk was down.
Those are two of the primary reasons we recommend a separate syslog/UDP collection method. Companies doing more than 100 GB a day typically have a robust *nix-based syslog collection tier in place because they can't lose those logs.
Syslog for Windows... Kiwi works for low volume. At any large scale you won't have much luck.
Well, I would argue that if UDP HA is the concern that Splunk is trying to address with this limitation in their software, they should:
Besides, wouldn't load-balanced indexers mitigate the risks in your scenario anyway? This feels more like an ongoing oversight than a feature. If you have any whitepapers or other documentation on Splunk's official stance on this, I'd love to read them.
You should create a dedicated syslog server and send the UDP traffic to it. Then install a Universal Forwarder on the syslog server to send the data to the appropriate index.
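A minimal sketch of that setup, assuming rsyslog as the syslog server (the paths, hostname `host-a`, and index names are placeholders, not prescribed values). rsyslog listens on UDP/514 and writes one file per sending host, so the Universal Forwarder can route each host's logs to its own index:

```
# /etc/rsyslog.conf additions on the syslog server (sketch)
module(load="imudp")
input(type="imudp" port="514")

# Write each sender's messages to its own file, keyed on hostname,
# so the forwarder can assign a different index per host.
template(name="PerHostFile" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")
*.* action(type="omfile" dynaFile="PerHostFile")
```

Then, on the same box, a Universal Forwarder inputs.conf monitors each host's directory and sets the index:

```
# $SPLUNK_HOME/etc/system/local/inputs.conf on the forwarder (sketch)
[monitor:///var/log/remote/host-a/]
index = index1
sourcetype = syslog
disabled = false
```

This also addresses the HA point above: rsyslog keeps writing to disk while Splunk restarts, so nothing is lost during the outage window.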