Hello,
Due to the requirements of the project I am working on, all events will arrive at Splunk Cloud from SC4S. They will arrive at SC4S from a single syslog server that groups several sources and sends them via RFC5424.
In SC4S, we filter and apply a parser to assign an index and sourcetype to the events before sending them to Splunk Cloud. We normally filter by APPNAME, which allowed us to separate some sources, but now we are receiving events from Linux servers with a multitude of APPNAMEs.
We have managed to group the Linux events that arrive from su, sshd-session, sshd, proftpd, sudo, SSH, usermod, radiusd, and root and send them to Splunk Cloud.
We have installed this add-on in Splunk Cloud: https://splunkbase.splunk.com/app/833
How can we get the events parsed with the information from the add-on? Can anything be done from SC4S?
Important: we only have the Splunk Cloud instance and SC4S running in AWS, no other elements.
Best regards.
Hello and thank you,
So why does the SC4S documentation mention that Add-on here?
https://splunk.github.io/splunk-connect-for-syslog/main/sources/base/nix/
Unfortunately, having another syslog-ng server that aggregates everything and sends to SC4S is a project requirement, and installing a forwarder on that server isn't feasible either.
Regards
The docs call for the add-on to be installed on the search head to handle search-time extractions. If those extractions don't work, verify the data is arriving in the format the add-on expects. It's possible the aggregating syslog server is rewriting the events into a format the add-on doesn't support.
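As a quick check, a search like the following shows what sourcetypes the events actually landed with (the index and sourcetype shown here are the SC4S defaults for the nix source; adjust to your environment):

```
index=osnix sourcetype=nix:syslog earliest=-15m
| stats count by host, sourcetype, source
```

If the events arrive with a different sourcetype than the add-on expects, its search-time extractions won't fire.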
SC4S sends data to Splunk Cloud via HEC with the index and sourcetype already assigned, so the add-on's index-time parsing is bypassed. Its search-time extractions still apply, but only if the sourcetype on the events matches what the add-on expects.
SC4S is a syslog server (it's a container for syslog-ng) so there's no need for a separate syslog server to aggregate inputs. Letting SC4S do the aggregation may simplify things.
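If the built-in SC4S nix filters don't match because the aggregating server rewrites the program (APPNAME) field, a local app parser can force the classification. A sketch, assuming the standard SC4S local config layout (/opt/sc4s/local/config/app_parsers/) and that vendor('nix') with product('syslog') maps to the nix:syslog sourcetype the add-on expects — verify the exact names against the SC4S docs for your version:

```
application app-syslog-custom_nix[sc4s-syslog] {
    filter {
        # match the Linux program names arriving from the aggregator
        program('sshd' type(string) flags(prefix))
        or program('sudo' type(string))
        or program('su' type(string))
        or program('proftpd' type(string));
    };
    parser {
        # assign SC4S metadata so the events get the nix sourcetype/index
        p_set_netsource_fields(
            vendor('nix')
            product('syslog')
        );
    };
};
```

After dropping the file in place and restarting SC4S, the index and sourcetype can still be overridden per key in splunk_metadata.csv if needed.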