How can I clone data from a HF to two different Splunk instances? Listing both groups in defaultGroup in outputs.conf does not work.
#splunk #clone #heavyforwarder
If you want to clone all data from the Heavy Forwarder to both destinations, this outputs.conf should be sufficient:
[tcpout]
defaultGroup = newgroup,oldgroup
[tcpout:newgroup]
server = xxxx.xxxx.xxxx:9997
[tcpout:oldgroup]
server = xx.xx.xx.xx:9997
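To apply and double-check the change (assuming a standard $SPLUNK_HOME install; the server values above are placeholders for your real indexers), you can restart and then run btool:
# restart the HF so the new outputs.conf takes effect
$SPLUNK_HOME/bin/splunk restart
# print the effective tcpout settings and confirm both groups are listed
$SPLUNK_HOME/bin/splunk btool outputs list tcpout --debug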
Regards,
Prewin
If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
Hi @lucacaldiero,
someone from Splunk Support told me that this solution shouldn't work, but I have used it successfully:
outputs.conf
[tcpout:newgroup]
server = xxxx.xxxx.xxxx:9997
[tcpout:oldgroup]
server=xx.xx.xx.xx:9997
Don't use defaultGroup.
This way, all logs are sent to both destinations.
Ciao.
Giuseppe
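For reference, a minimal complete outputs.conf for this approach could look like the following (host names and ports are placeholders, not values from this thread). Without defaultGroup, the HF clones every event to every tcpout group it finds:
outputs.conf
[tcpout]
# no defaultGroup: data is cloned to all target groups defined below
# optional on a HF: defaults to false, so events are forwarded but not indexed locally
indexAndForward = false
[tcpout:newgroup]
server = new-indexer.example.com:9997
[tcpout:oldgroup]
server = old-indexer.example.com:9997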
Is something like this correct?
props.conf
[default]
TRANSFORMS-routing=alldata
transforms.conf
[alldata]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=newgroup
outputs.conf
[tcpout]
defaultGroup=oldgroup
[tcpout:newgroup]
server = xxxx.xxxx.xxxx:9997
[tcpout:oldgroup]
server=xx.xx.xx.xx:9997
In your transforms.conf you would need to have both groups in the _TCP_ROUTING FORMAT:
[alldata]
REGEX=.
DEST_KEY=_TCP_ROUTING
FORMAT=newgroup,oldgroup
Note also that this will only work for data ingested directly on the HF or sent to it from a UF; data arriving from another heavy forwarder is already parsed, so the transform won't be applied. However, setting both groups in defaultGroup in outputs.conf should work, I think. How come that didn't work for you?
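As a side note, the props/transforms route is mostly useful when you want only a subset of the data to reach the new group instead of cloning everything. A sketch, where the sourcetype name is hypothetical and the two tcpout group stanzas are the same as above:
props.conf
[my_sourcetype]
TRANSFORMS-routing = route_to_new
transforms.conf
[route_to_new]
# match every event of this sourcetype and override its output group
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = newgroup
outputs.conf
[tcpout]
defaultGroup = oldgroup
With this, my_sourcetype events go only to newgroup, while everything else follows defaultGroup to oldgroup.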
🌟 Did this answer help you? If so, please consider marking it as the solution. Your feedback encourages the volunteers in this community to continue contributing.