Hi, we run Splunk (v9.2) in a clustered environment that handles tons of different logs from a complex and varied network. A few departments each have a Sophos firewall that sends its logs via syslog (we would have used a UF, but we couldn't, because IT security can't touch those servers). To split the inputs by source type, we pointed the Sophos logs at UDP port 513 on one of our HFs and built an app that parses them with regexes. The goal was to trim the logs and save license usage. So far, so good... everything worked as intended... until it didn't.

As it turns out, every night at exactly midnight the Heavy Forwarder stops collecting from those sources (and only those), and nothing gets indexed until someone restarts the splunkd service (which could potentially be never) and brings the collector back to life. Here's the odd part: while collection is down, tcpdump still shows syslog traffic arriving on port 513, so the firewall never stops sending data to the HF, yet no logs are indexed. Only after a restart do the logs show up again (the commands we used for the check and the restart are in the P.S. at the bottom). The Heavy Forwarder in question runs on an Ubuntu 22 LTS minimized server edition.

Here are the app configuration files:

- inputs.conf

[udp:513]
sourcetype = syslog
no_appending_timestamp = true
index = generic_fw

- props.conf

[source::udp:513]
TRANSFORMS-null = nullQ
TRANSFORMS-soph = sophos_q_fw, sophos_w_fw, null_ip

- transforms.conf

[sophos_q_fw]
REGEX = hostname\sulogd\[\d+\]\:.*action=\"accept\".*initf=\"eth0\".*
DEST_KEY = queue
FORMAT = indexQueue
#
[sophos_w_fw]
REGEX = hostname\sulogd\[\d+\]\:.*action=\"accept\".*initf=\"eth0\".*
DEST_KEY = _MetaData:Index
FORMAT = custom_sophos
#
[null_ip]
REGEX = dstip=\"192\.168\.1\.122\"
DEST_KEY = queue
FORMAT = nullQueue

We didn't see anything out of the ordinary in the processes that start at midnight on the HF (what we checked is sketched at the very end of this post). At this point we have no clue what's happening. How can we troubleshoot this situation? Thanks
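
P.S. For reference, this is roughly how we confirm during the outage that the traffic is still reaching the HF, and how we revive collection. The interface name and install path are just examples from our setup:

# confirm UDP syslog packets are still arriving on port 513 (interface name will differ per host)
sudo tcpdump -n -i eth0 udp port 513

# the restart that brings collection back; path depends on the install
sudo /opt/splunk/bin/splunk restart
# or, if boot-start was enabled through systemd:
sudo systemctl restart Splunkd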
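
P.P.S. This is more or less what we looked at when checking for anything scheduled around midnight on the HF (nothing obvious came up); the journalctl dates are placeholders:

# cron jobs and systemd timers that could fire at 00:00
sudo ls -l /etc/cron.d /etc/cron.daily
sudo crontab -l
systemctl list-timers --all

# journal entries around the time collection stops (substitute the actual date)
sudo journalctl --since "YYYY-MM-DD 23:50" --until "YYYY-MM-DD 00:15" --no-pager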