Currently, all of our syslog data is sent to port 514. I want to better organize this data by its host. We are on Splunk 6.6.3. From the answers I've read, I believe the solution is sourcetype overriding. How do I perform sourcetype overriding within the UI? Every time I try to edit the input, I get an error because port 514 can't be used more than once.
Specifically, this has to do with the Fortinet FortiGate App for Splunk. There are data input instructions in the app's details, but the app does not recognize the data, which reaches our indexer through syslog. The NOTE from the app is below.
Note: As of version 1.2, the Splunk TA (Add-on) for FortiGate no longer matches wildcard source or sourcetype to extract FortiGate log data; instead, a default sourcetype fgt_log is specified in default/props.conf. Please follow the instructions below to configure your input and props.conf for the App and TA (Add-on).
Through Splunk Web UI:
Option 1: Adding a UDP input
Port: 514 (Example, can be modified according to your own plan)
Sourcetype: fgt_log (Example, can be modified according to your own plan but need to match the sourcetype stanza in props.conf)
--> This won't let me edit the input or add a new one, because port 514 is already in use.
Option 2: Adding a file input
Settings->Data Input->Files & Directories
Browse: Select the file directory
Select sourcetype: if fgtlog is not created yet, click Save As -> Name:fgtlog
Leave others unchanged and save.
--> I don't know what file directory needs to be selected.
Basically - all data that arrives on a given UDP/TCP port looks the same to Splunk, so you can only assign a single sourcetype to everything that comes in on that port.
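To put that in inputs.conf terms (the file location here is an assumption; yours may differ), the whole port gets exactly one stanza, and the sourcetype set there applies to every sender:

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf (example location)
[udp://514]
sourcetype = fgt_log
connection_host = ip
```

That's also why the UI refuses a second input on 514 - it would be a duplicate stanza. Editing the existing input's sourcetype is the only per-port knob you get.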
Best practice would be to fire up a syslog server (like syslog-ng), and have it write all data to disk, split by hostname/IP.
You can then create a file monitor input for each device, with its own sourcetype and other settings.
There are a few examples for best practices out there, e.g. here.
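A minimal sketch of that setup might look like this - the syslog-ng destination template and the Splunk monitor stanza are illustrative assumptions, not the app's required config:

```ini
# syslog-ng.conf (sketch): write each sender's logs to its own directory
source s_net { udp(ip(0.0.0.0) port(514)); };
destination d_by_host { file("/var/log/remote/${HOST}/syslog.log"); };
log { source(s_net); destination(d_by_host); };
```

Splunk can then monitor the whole tree, with the sending host taken from the directory name:

```ini
# inputs.conf (sketch)
[monitor:///var/log/remote/*/syslog.log]
sourcetype = fgt_log
host_segment = 4
```

host_segment = 4 takes the fourth path segment (the ${HOST} directory) as the event's host, so each device keeps its identity without any index-time rewriting.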
If that's not possible at all, try this:
You'll have to separate it by the only identifier you get - the hostname/IP address of the sender.
Set up a props.conf like this:
[host::your_hostname_or_IP]
TRANSFORMS-rewrite-sourcetype1 = rewrite-sourcetype1
and a transforms.conf like this:
[rewrite-sourcetype1]
REGEX = .
FORMAT = sourcetype::fgt_log
DEST_KEY = MetaData:Sourcetype
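After placing those stanzas on the indexer (e.g. in $SPLUNK_HOME/etc/system/local/ - the location is an assumption; a dedicated app directory works too) and restarting, you can check what Splunk actually merged with btool:

```shell
# show the effective props/transforms stanzas and which file each setting came from
$SPLUNK_HOME/bin/splunk btool props list 'host::your_hostname_or_IP' --debug
$SPLUNK_HOME/bin/splunk btool transforms list rewrite-sourcetype1 --debug
```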
You don't have access to any of the servers? Neither the indexer nor an additional dedicated server/VM just for receiving syslog and saving it to different files by host?
Then it is hard for Splunk to turn one input into different sourcetypes at the input phase. You can write rules in Splunk for that, but they will bypass the processing in our add-on.
Splitting the input and rewriting the sourcetype will bypass the props.conf in our add-on, because we need to rewrite fgt_log to fgt_traffic, fgt_utm, ... at the props.conf stage. Can Splunk chain/pipeline those processes?
Yeah, that's right - if the add-on does index-time stuff, that won't happen, because the rules to apply are determined only once, for the very first sourcetype the data is ingested with. You can rewrite the sourcetype at index time, but Splunk won't apply the new sourcetype's props rules - so it doesn't fix your problem.
In that case, writing that stuff to disk and using proper file monitors seems to be the only way to do it right.