Hello. We have a large number of devices that send syslog to Splunk, which we need to ingest. All devices and the Splunk deployment are on premises. There are many different types of syslog messages that we need to collect. As an example, some of the sourcetypes are:
All devices currently send syslog to the same IP address and UDP port 514.
Currently we manage this with an rsyslog configuration that is shared across servers via Puppet. This allows us to edit the syslog configuration on one server and have it pushed out to all the others.
The rsyslog.conf file
The various files are written to disk, and an inputs.conf file is automatically updated to ensure each file is ingested into Splunk. The file and directory path allow us to determine the index the data is written to and the sourcetype.
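As a rough sketch of the pattern described above (the directory layout, sourcetype name, and index are illustrative assumptions, not the actual config): rsyslog writes each message to a per-host file under a sourcetype-specific directory, and a generated Splunk monitor stanza picks that directory up.

```
# rsyslog.conf (illustrative sketch, not the real config)
# Write messages from a given device range to
# /var/log/remote/<sourcetype>/<host>.log
template(name="PerHostFile" type="string"
         string="/var/log/remote/cisco_ios/%HOSTNAME%.log")

if ($fromhost-ip startswith "10.1.") then {
    action(type="omfile" dynaFile="PerHostFile")
    stop
}

# inputs.conf (illustrative sketch) -- generated to match the path,
# which is what lets the path determine index and sourcetype
[monitor:///var/log/remote/cisco_ios/*.log]
index = network
sourcetype = cisco:ios
host_segment = 5
```

The key design point is that every new sourcetype means another template/filter pair in rsyslog plus another monitor stanza, which is where the configuration sprawl the original post mentions comes from.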
This works, but it is quite complex. The servers are currently based on CentOS 6, which reaches end of life in November.
How do other people collect and manage syslog in their environments?
Thanks for your help.
Does anyone have any experience running "Splunk Connect 4 Syslog" (SC4S) within an enterprise environment?
Looking at the Splunk Connect 4 Syslog (SC4S) Solution.
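For anyone evaluating it: SC4S typically runs as a container and forwards events to Splunk over the HTTP Event Collector (HEC) rather than writing files to disk, which removes the file/inputs.conf layer entirely. A minimal sketch of its environment file, assuming the documented default-destination variables (the URL and token here are placeholders):

```
# /opt/sc4s/env_file (sketch; values are placeholders)
SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://splunk-hec.example.com:8088
SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=00000000-0000-0000-0000-000000000000
# Only if the HEC endpoint presents a self-signed certificate:
SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no
```

SC4S then classifies incoming messages against its built-in filters to assign index and sourcetype, so per-device-type rsyslog templates are no longer needed for supported sources.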
Thank you, the referenced blog looks interesting.
Hi
As @inventsekar pointed out, you should definitely use something other than Splunk's own TCP/UDP stream receivers. Use a real syslog server (clustered or not, that isn't a big issue) and try to change the sources to use at least TCP, and preferably TLS or even RELP, as the sending protocol. Those do a better job of ensuring you don't lose events. It's also mandatory to use the correct profiles on your load balancer (e.g. on F5, FastL4 is needed), otherwise it will drop some events.
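To illustrate the protocol suggestion above, a minimal receiver-side sketch in rsyslog that accepts both plain TCP and RELP (the port numbers are assumptions; RELP requires the rsyslog-relp package for the imrelp module):

```
# rsyslog receiver sketch: plain TCP plus RELP
module(load="imtcp")
input(type="imtcp" port="514")

# RELP adds application-level acknowledgements, so the sender
# knows each message was accepted rather than just transmitted
module(load="imrelp")
input(type="imrelp" port="2514")
```

UDP gives no delivery guarantee at all; TCP protects against network loss but can still drop buffered messages on restart, which is the gap RELP's acknowledgements close.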
The most commonly used are syslog-ng and rsyslog.
r. Ismo
Hi @davidwaugh
From my limited knowledge of syslog, many people suggest that syslog-ng is the best option to consider.
Not sure if you already know about syslog-ng; I just wanted to point you to this blog. Thanks.
Thanks, we already use rsyslog. I was just wondering if there was an easier way that didn't involve lots of configuration files.