Hello,
We have multiple Cisco switches configured to send logs to Splunk. When comparing the logs on a switch against the logs in Splunk, they do not match up: Splunk does not catch all of the entries, it misses them in large chunks, and the missing entries are not of any single type. I've searched by the switch's IP and by the text of the log messages themselves, thinking the events might have been mislabeled, but they are not in Splunk at all.
We have our switches set up to log at an informational level. This is happening across most switches in our environments - not all logs are entering Splunk. Is this a known issue?
Thanks!
How does the data get from the switches to Splunk? Are they sent via syslog? Do the events go directly to Splunk or to a syslog server? Are they sent using TCP or UDP? Some configurations are more likely to lead to data loss than others.
Another possibility is that the data is getting to Splunk but is onboarded poorly, so the events cannot be located by your searches.
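One quick way to check for that second case is a broad search across all indexes to see whether the events landed under an unexpected index or sourcetype. A sketch (the IP address is a placeholder for one of your switches, and searching index=* requires that your role can see all indexes):

```
index=* host="10.1.2.3" earliest=-24h
| stats count by index, sourcetype
```

If the events show up here under a different index or sourcetype than expected, the problem is onboarding rather than delivery.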
We have the switches configured to send to Kiwi Syslog Server, which is installed on the same host as Splunk. We have the Data Inputs in Splunk listening on 514 TCP and UDP with a source type of syslog, and TCP 601 with a source type of cisco_syslog.
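For reference, the Data Inputs described above would correspond roughly to these inputs.conf stanzas (a sketch of the configuration as described, not a recommendation):

```ini
# inputs.conf - network inputs as described above
[udp://514]
sourcetype = syslog

[tcp://514]
sourcetype = syslog

[tcp://601]
sourcetype = cisco_syslog
```

Note that if Kiwi Syslog Server on the same host is also bound to port 514, only one of the two listeners can actually receive those packets, which is one configuration that can produce the kind of gaps you describe.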
The switch shows (show logging command):
Syslog logging: enabled
Trap logging: level informational, 245 message lines logged
Logging to <Splunk/KiwiIP> (udp port 514, audit disabled, link up)
...
Logging to <Splunk/KiwiIP> (tcp port 601, audit disabled, link up)
...
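The switch-side configuration that produces output like the above would look roughly like this (Cisco IOS syntax, keeping <Splunk/KiwiIP> as a placeholder):

```
logging trap informational
logging host <Splunk/KiwiIP> transport udp port 514
logging host <Splunk/KiwiIP> transport tcp port 601
```

Worth noting: UDP syslog is fire-and-forget, so any drop between the switch and the listener is silently lost, while TCP at least detects a down connection.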
Sending syslog directly to Splunk is discouraged. The best practice is to have the syslog server write the data to files on disk and have Splunk monitor those files; that way, events received while Splunk is restarting or busy are not lost. Another option is the Splunk Connect for Syslog (SC4S) app.
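As a sketch of that file-based layout: point the syslog server at a log directory, one subdirectory per device, and have Splunk monitor it. The paths here are assumptions to adjust for your environment:

```ini
# inputs.conf - monitor the files the syslog server writes to disk
# (directory layout is an assumption: C:\Logs\Kiwi\<device>\*.log)
[monitor://C:\Logs\Kiwi\*\*.log]
sourcetype = syslog
# host_segment sets the host field from a path segment;
# 4 corresponds to the <device> directory in the layout above
host_segment = 4
```

With this layout, the syslog listener and Splunk no longer compete for the same ports, and the files on disk give you a ground truth to compare against what is indexed.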
Thank you, I will take a look at our setup and see if we can get this updated.