The client's F5 Load Balancer is sending data to our Splunk Syslog Heavy Forwarder, but when searching on the Splunk Search Head the data is incomplete/missing. I ran a packet capture (tcpdump) on the Syslog server for traffic from the F5 Load Balancer and pulled the syslog-ng output file for the F5 host.
My assumption is that the Syslog server is receiving all the syslog messages sent from the F5 host, but syslog-ng is not writing all of them to file. In the packet capture, the Syslog server received 800+ syslog messages, yet only 68 syslog messages were written to file.
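For reference, the comparison can be reproduced roughly like this; the capture file name, the F5 address, 514/udp and the log path below are examples rather than the exact values from this environment:

```
# Count F5 syslog messages that arrived on the wire (from the capture)
tcpdump -nn -r f5_capture.pcap 'udp port 514 and host 192.0.2.10' | wc -l

# Count messages syslog-ng actually wrote out for that host
wc -l /var/log/f5/f5.log
```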
Any suggestions as to why this is happening, or how to troubleshoot this issue?
Syslog can be tricky to configure and maintain. Splunk's own TCP/UDP inputs are neither especially efficient nor reliable and are not recommended for production use. Any intermediate syslog solution (rsyslog, syslog-ng) must be properly configured to cope with the amount of data you're going to throw at it. If your syslog-ng instance doesn't write out all the messages that show up on the network interface, the first thing to check is its configuration.
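For what it's worth, the knobs that most often matter for UDP syslog at this kind of volume are the socket receive buffer on the source and the output queue on the destination. A minimal sketch, assuming syslog-ng 3.x, syslog arriving on 514/udp, and hypothetical names and paths (s_f5, d_f5_file, /var/log/f5/...):

```
options {
    flush_lines(100);        # batch writes so disk I/O isn't done per message
    log_fifo_size(20000);    # per-destination output queue, in messages
};

source s_f5 {
    network(
        transport("udp")
        port(514)
        so_rcvbuf(8388608)   # enlarge the socket receive buffer; the kernel
                             # limit (net.core.rmem_max) may need raising too
    );
};

destination d_f5_file {
    file("/var/log/f5/f5-${HOST}.log");
};

log {
    source(s_f5);
    destination(d_f5_file);
    flags(flow-control);     # only effective for TCP sources; UDP datagrams
                             # are dropped silently once buffers overflow
};
```

With a UDP source, so_rcvbuf (together with net.core.rmem_max) is usually the first thing to look at, because syslog-ng never even sees datagrams the kernel has already dropped.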
Switching to TCP might help in some circumstances, but you'd also want to check your queue sizes, thread limits and so on. If you're using files as intermediate storage that the forwarder reads from, check your I/O performance as well, because that can also block message processing. There's no general one-size-fits-all answer: you have to make sure your syslog layer performs efficiently. It's not a Splunk problem as such.
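Before changing anything, it's worth finding out where the messages are actually being lost. A rough checklist, assuming a Linux host and syslog-ng 3.x (counter names can vary slightly between versions), with 514/udp again an assumption:

```
# Per-source / per-destination counters kept by syslog-ng itself;
# a non-zero "dropped" value points at a full output queue.
syslog-ng-ctl stats

# Kernel UDP statistics; "packet receive errors" / "receive buffer errors"
# mean datagrams were dropped before syslog-ng ever read them.
netstat -su

# Sanity-check which process is bound to the syslog port.
ss -ulnp | grep ':514 '
```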