Hi Splunkers,
I have an HWF that collects the firewall logs. For cost-saving reasons, some events are filtered out and not ingested into the indexer. For example, I have
props.conf
[my_sourcetype]
TRANSFORMS-set = dns, external
and transforms.conf
[dns]
REGEX = dstport=53
DEST_KEY = queue
FORMAT = nullQueue
[external]
REGEX = "to specific external IP range"
DEST_KEY = queue
FORMAT = nullQueue
So my HWF drops those events and the rest is ingested into the on-prem indexer. So far, so good...
One of our operational teams requested that I ingest "their" logs to their Splunk Cloud instance.
How can I do this technically?
1. I want to keep all the logs on the on-prem indexer with the filtering
2. I want to ingest events from a specific IP range to Splunk Cloud without filtering
BR,
Norbert
Every solution based on CLONE_SOURCETYPE quickly gets ugly, because CLONE_SOURCETYPE is not selective. You not only have to process both event streams, duplicating your definitions for the original sourcetype so that the index-time settings also get applied to the new sourcetype, you also have to rewrite the sourcetype back to the old one at the end. And you have to filter both streams so that each one only works on its own subset of events. Very, very ugly, and it quickly becomes unmaintainable. And if you by any chance manage to create a loop, you'll crash your splunkd.
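Just to illustrate how much plumbing that means, here is a rough sketch of what such a setup tends to look like (all stanza names, the cloud_group output group and the IP-range regex below are made-up placeholders, not a tested config):
transforms.conf
[clone_for_cloud]
# clone only the operational team's events into a temporary sourcetype
REGEX = <regex matching the team's IP range>
CLONE_SOURCETYPE = my_sourcetype_cloud
[route_clone_to_cloud]
# send the clone to a dedicated tcpout group (must exist in outputs.conf)
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = cloud_group
[rewrite_clone_sourcetype]
# rename the clone back to the original sourcetype at the end
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::my_sourcetype
props.conf
[my_sourcetype]
TRANSFORMS-set = clone_for_cloud, dns, external
[my_sourcetype_cloud]
# all index-time settings of the original sourcetype have to be repeated here
TRANSFORMS-cloud = route_clone_to_cloud, rewrite_clone_sourcetype
Two extra transforms, a duplicated props stanza and a separate output group, just for one subset of events - hence "ugly".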
There is probably another way, but the easiest way around it is to set up an intermediate forwarder (a UF, so that it doesn't do any parsing, filtering and whatnot) with one input and two outputs, and just send the data from this "cloud all" environment both to Cloud and to your HF.
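On that intermediate UF, outputs.conf would look something like this (hostnames and ports are placeholders; in practice the splunk_cloud group and its SSL settings come from the Universal Forwarder credentials app you download from your Splunk Cloud stack):
outputs.conf
[tcpout]
defaultGroup = onprem_hf, splunk_cloud
[tcpout:onprem_hf]
# your existing HF, which keeps doing the nullQueue filtering for on-prem
server = hf.example.local:9997
[tcpout:splunk_cloud]
# placeholder - normally delivered by the Splunk Cloud UF credentials app
server = inputs.<your-stack>.splunkcloud.com:9997
Listing two groups in defaultGroup makes the UF clone the data to both destinations.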
If your FW sends those logs via a syslog feed, then it's probably easier to add e.g. rsyslog where those logs are received and do the filtering/forwarding there, instead of using Splunk transforms.conf for that?
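With rsyslog that could be something as simple as this (RainerScript syntax; the IP range, hostnames and ports are placeholders):
# everything goes to the existing HF, which keeps filtering for the on-prem indexer
action(type="omfwd" target="hf.example.local" port="10514" protocol="tcp")
# only the operational team's IP range is additionally forwarded, unfiltered, towards the Cloud-bound path
if ($fromhost-ip startswith "192.0.2.") then {
    action(type="omfwd" target="cloud-forwarder.example.local" port="10515" protocol="tcp")
}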
It's a FortiAnalyzer sending via a custom TCP port. Probably the simplest solution will be to configure a new log forwarding directly on the FAZ with filtering.
Thanks for the help!