A customer has a heavy forwarder (A) that is forwarding logs to my local heavy forwarder (B). I have no control over heavy forwarder A, and I would like to use props.conf to perform source- and sourcetype-specific processing/rewriting (e.g. using SEDCMD) on heavy forwarder B before sending the events on to a syslog server.
I am having difficulties getting the forwarded events to go through local processing - if I use _SYSLOG_ROUTING in inputs.conf, the events seem to bypass the local processing and go directly to the output.
I have tried specifying queue = parsingQueue (even though this is the default), but it doesn't seem to have any effect.
How can I get the events forwarded from the customer's heavy forwarder A to go through the processing stages on heavy forwarder B?
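For reference, the receiving input on heavy forwarder B currently looks roughly like this (the port and syslog group name are placeholders):

    # inputs.conf on heavy forwarder B
    [splunktcp://9997]
    # force re-parsing (no effect so far - this is already the default)
    queue = parsingQueue
    # send the received events to a syslog output group defined in outputs.conf
    _SYSLOG_ROUTING = my_syslog_group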
This does not affect transforms, just line-merging, timestamp recognition, etc. (the parsing and aggregation queues).
See: https://wiki.splunk.com/Community:HowIndexingWorks
If you want to "re-parse" events, you can try the inputs.conf setting suggested here: https://answers.splunk.com/answering/275684/view.html
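If I remember that post correctly, the trick is to override the route setting on the receiving splunktcp stanza so that cooked events are sent back into the parsing queue instead of straight to the index queue - roughly like this (the port is a placeholder, and note that this is the unsupported approach mentioned further down):

    # inputs.conf on heavy forwarder B
    [splunktcp://9997]
    # the default route sends cooked data (has _linebreaker) to indexQueue;
    # pointing it at parsingQueue forces the events through parsing again
    route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:parsingQueue;absent_key:_linebreaker:parsingQueue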
FYI, what you're looking to do is very easy to do in Cribl. You can point already cooked data at us and transform it how you see fit before delivering it out to syslog, or to any other system we support.
Note that this is an endorsement by a Cribl employee. As a former user of Cribl, I would not recommend it.
The answer given in https://answers.splunk.com/answering/275684/view.html did the trick - the local processing is now active, and the SEDCMD settings in props.conf are working.
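For anyone who lands here later, the kind of rewrite that is now taking effect looks roughly like this (the sourcetype and sed expression are just examples):

    # props.conf on heavy forwarder B
    [customer:logs]
    # mask IPv4 addresses before the events are forwarded to syslog
    SEDCMD-mask_ip = s/\d{1,3}(\.\d{1,3}){3}/x.x.x.x/g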
I wondered why that post had popped back up. That approach is unsupported and a bit risky, but glad it's working for you.
That's new to me - I've never seen this before.
As far as I know you can't. Once the data is processed by heavy forwarder A it becomes cooked data, and when it reaches heavy forwarder B it is not processed again because it is already cooked. Only the first Splunk Enterprise instance in the path processes the data; subsequent instances either pass it on to the next tier or, in the case of an indexer, store it without processing it again.
Would it be possible to forward the data from heavy forwarder A in some other form (e.g. as "uncooked") so that the processing could be done on heavy forwarder B?
I am afraid you can't.
Have you tried setting _SYSLOG_ROUTING via props/transforms, as suggested here:
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad
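Something along these lines, where the sourcetype, transform, and group names are only examples:

    # props.conf
    [customer:logs]
    TRANSFORMS-syslog = send_to_syslog

    # transforms.conf - match every event and set the syslog routing key
    [send_to_syslog]
    REGEX = .
    DEST_KEY = _SYSLOG_ROUTING
    FORMAT = my_syslog_group

    # outputs.conf - the syslog group the events are routed to
    [syslog:my_syslog_group]
    server = 10.0.0.1:514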
Can you please give a config example of your props/transforms which is not working as expected?
Yes - I have tried adding a [default] stanza in props.conf to change the routing via another path defined in transforms.conf and outputs.conf - but it has no effect...
I had problems using the [default] stanza, too. You can try this:
You can also define global settings outside of any stanza, at the top of the file.
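For example, reusing the (still hypothetical) send_to_syslog transform from earlier in the thread, props.conf could look like this:

    # props.conf - settings placed above the first stanza act as global defaults
    TRANSFORMS-syslog = send_to_syslog

    # stanza-specific settings below still take precedence
    [customer:logs]
    SEDCMD-mask_ip = s/\d{1,3}(\.\d{1,3}){3}/x.x.x.x/g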