Good evening everyone,
we have a problem in a Splunk cluster composed of 3 indexers, 1 cluster manager (CM), 1 search head (SH), 1 deployer, 3 heavy forwarders (HF), and 3 universal forwarders (UF).
The UF hosts receive logs from different Fortinet sources via syslog; rsyslog writes them to a specific path, which the UFs monitor. Splunk_TA_fortinet_fortigate is installed on the forwarders.
These logs must be saved to a specific index in Splunk, and a copy must also be sent to two distinct destinations (third-party devices) in two different formats (a customer requirement).
Since the formats differ (one of the two contains TIMESTAMP and HOSTNAME, the other does not), rsyslog saves them to two distinct paths, applying two different templates.
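Simplified, the relevant rsyslog part looks roughly like this (template names and paths are illustrative, not our exact config):

template(name="withHeader" type="string" string="<%pri%>%timestamp% %hostname% %msg%\n")
template(name="msgOnly" type="string" string="<%pri%>%msg%\n")

# write each incoming Fortinet event to both paths, one per required format
action(type="omfile" file="/var/log/fortinet/with_header.log" template="withHeader")
action(type="omfile" file="/var/log/fortinet/msg_only.log" template="msgOnly")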
So far so good.
The issues we have encountered are:
- Some events are indexed twice in Splunk
- Events sent to the customer do not always match the required formats
For example, in one of the two cases the required format is the following:
<PRI> date=2024-09-12 time=14:15:34 devname="device_name" ...
But looking at the outgoing packets with tcpdump, some are correct, while others are in the format
<PRI> <IP_address> date=2024-09-12 time=14:15:34 devname="device_name" ...
and still others in the format
<PRI> <timestamp> <IP_address> date=2024-09-12 time=14:15:34 devname="device_name" ...
The outputs.conf file is as follows:
[tcpout]
defaultGroup = default-autolb-group
[tcpout-server://indexer_1:9997]
[tcpout-server://indexer_2:9997]
[tcpout-server://indexer_3:9997]
[tcpout:default-autolb-group]
server = indexer_1:9997,indexer_2:9997,indexer_3:9997
disabled = false
[syslog]
[syslog:syslogGroup1]
disabled = false
server = destination_IP_1:514
type = udp
syslogSourceType = fortigate
[syslog:syslogGroup2]
disabled = false
server = destination_IP_2:514
type = udp
syslogSourceType = fortigate
priority = NO_PRI
This is the props.conf:
[fgt_log]
TRANSFORMS-routing = syslogRouting
[fortigate_traffic]
TRANSFORMS-routing = syslogRouting
[fortigate_event]
TRANSFORMS-routing = syslogRouting
and this is the transforms.conf:
[syslogRouting]
# match every event and route a copy to both syslog destination groups
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslogGroup1,syslogGroup2
Any ideas?
Thank you,
Andrea
I think you're overthinking it. You already have those events in rsyslog as you're using it to receive the events in the first place. So instead of saving it to files and then bending over backwards sending them over syslog, just send them directly from those rsyslogs to final destinations. Rsyslog is very flexible with templates for sending data away.
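Roughly something like this (a sketch only, reusing whatever templates you already apply to the files; names and IPs are placeholders):

# send a copy of each event straight to the third parties, one template per required format
action(type="omfwd" target="destination_IP_1" port="514" protocol="udp" template="msgOnly")
action(type="omfwd" target="destination_IP_2" port="514" protocol="udp" template="withHeader")

That way the UFs only have to monitor the files and feed the indexers, and the third-party copies never touch Splunk at all.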
Hello,
we reconfigured rsyslog: it now receives logs from the appliances, saves them locally to disk, and sends copies to the remote destinations on the client side.
But we now have problems with indexing: data is no longer being received from the HFs.
I think the UFs may be undersized for all of these activities.
Is there a way to check if we have a performance problem now?
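For instance, would an internal search like this be the right place to look for blocked queues on the forwarders (assuming the UFs still ship their _internal logs)?

index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name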
Thank you,
Andrea
Sorry, mate, but the level of completeness of your description is comparable to "OK, I replaced the flat tyre but now I can't put the car in gear - could it be a problem with the battery?".
We have no idea what your setup looks like, what hosts you have, or what configs. How can we know what's wrong?
Thank you @PickleRick, we'll try using rsyslog instead of Splunk to forward the logs and let you know if it solves the issue.
Can you please tell me what you think about the duplicate events in the index?
What should I investigate?
Thank you,
Andrea
Since you have performed a tcpdump on the incoming packets, you know that the format is determined by the source configuration or by application design.
1) Can you correlate the format to specific hosts? That would lead me to believe a configuration change at the source is a possible solution.
2) Do the specific hosts have different Fortinet versions installed? Perhaps the vendor has modified the syslog message format between releases (unlikely, but not impossible).
3) Are the message formats correlated to specific message types or services within the Fortinet suite? This would be less likely to be fixable via configuration.
The RFC for syslog was more of a suggestion than a hard rule, which is why vendors and applications often don't have a standard implementation of the recommended fields. Through rsyslog's destination configurations you have the opportunity to manipulate the output, adding fields or cloning fields into a specific order, but most security rules state that you should not modify logs in transit. That might not be an issue for your implementation, so it is worth looking into.
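For example, a single rsyslog template can rebuild a consistent header in front of the raw message, so every destination sees the same field order regardless of what the source sent (a sketch; adjust the properties to the customer's required format):

template(name="normalized" type="string" string="<%pri%>%timereported:::date-rfc3339% %fromhost-ip% %msg%\n")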
The duplicate ingestion is interesting and much harder to pinpoint. Depending on the frequency, can you get a tcpdump to indicate whether the message was generated at the source twice, or whether the UF monitoring the file had a hiccup?
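Something like this on the receiving host would let you diff what arrives on the wire against what ends up in the monitored file (interface, port, and address are guesses for your environment):

# capture syslog traffic from one chatty source and check for identical payloads
tcpdump -nn -i any -w /tmp/fortinet_dup.pcap 'udp port 514 and host <fortigate_IP>'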