As we work on our migration to the cloud, we have the following case -
We send syslog data to a heavy forwarder (HF) in the cloud (and to the on-prem indexers) on port 9997. When the data reaches this HF, we would like to fork just the firewall data to a subset of the indexers. Is it possible to configure such routing?
We would like to have something like -
[<transforms_stanza_name>]
SOURCE_KEY = _MetaData:Index
REGEX = ^firewall
DEST_KEY = _TCP_ROUTING
FORMAT = <tcpout group containing the subset of cloud indexers>
Hi @gcusello
The data that I'm referring to is cooked data that we ingest into the HF via port 9997. Can this data be routed from the HF to specific cloud indexers?
Sorry if I wasn't clear earlier.
Hi @danielbb,
it's a best practice to use at least two HFs as concentrators to send logs to Splunk Cloud.
Obviously, transformations must be implemented on these HFs.
Ciao.
Giuseppe
Hi @danielbb,
yes, it's correct.
Obviously, you have to follow all the steps described at https://docs.splunk.com/Documentation/Splunk/9.0.0/Forwarding/Routeandfilterdatad#Filter_and_route_e...
In other words, you also have to define the two destinations in outputs.conf and add the corresponding stanza to props.conf.
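Putting those steps together, a minimal sketch of the three configuration files on the HF could look like this. The group names (`all_cloud_indexers`, `cloud_firewall_indexers`), server addresses, and the `host::firewall*` pattern are assumptions for illustration, not values from this thread:

```ini
# outputs.conf -- define both destinations as tcpout groups
[tcpout]
defaultGroup = all_cloud_indexers

[tcpout:all_cloud_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[tcpout:cloud_firewall_indexers]
server = idx3.example.com:9997

# props.conf -- attach the transform to the relevant data
[host::firewall*]
TRANSFORMS-routing = route_firewall_to_subset

# transforms.conf -- override _TCP_ROUTING for matching events
[route_firewall_to_subset]
SOURCE_KEY = _MetaData:Index
REGEX = ^firewall
DEST_KEY = _TCP_ROUTING
FORMAT = cloud_firewall_indexers
```

Events whose index name matches `^firewall` are sent to the `cloud_firewall_indexers` group; everything else follows `defaultGroup`.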
Ciao.
Giuseppe
Hi @gcusello
Great explanation on this page -
Heavy forwarders can filter and route data to specific receivers based on source, source type, or patterns in the events themselves. For example, you can send all data from one group of machines to one indexer and all data from a second group of machines to a second indexer. Heavy forwarders can also look inside the events and filter or route accordingly. For example, you could use a heavy forwarder to inspect WMI event codes to filter or route Windows events. This topic describes a number of typical routing scenarios.
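As a sketch of the "one group of machines to one indexer" scenario described above, the documented pattern is a props.conf stanza keyed on a host pattern plus a transform whose `REGEX = .` matches every event. The host pattern and group name here are assumptions:

```ini
# props.conf -- select events from one group of machines (hypothetical pattern)
[host::groupA*]
TRANSFORMS-routeA = route_to_indexerA

# transforms.conf -- route every matching event to one output group
[route_to_indexerA]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = indexerA_group
```

A second `host::` stanza with its own transform and `FORMAT` would send the second group of machines to a different indexer.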
Hi @danielbb,
if one answer solves your need, please accept it for the benefit of other Community members, or tell me how I can help you further.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated 😉