We are in a transition from the "legacy" farm to the new one. During this transition period, the clients would like to send the same data to the different farms under different index names. Is that possible, considering that all the outputs.conf files get combined...?
I think your best bet may be to have your universal forwarder(s) send to two heavy forwarders thusly:
[tcpout]
defaultGroup=heavyforwarder1,heavyforwarder2
[tcpout:heavyforwarder1]
server=10.1.1.197:9997
[tcpout:heavyforwarder2]
server=10.1.1.200:9997
Each heavy forwarder would be configured to rewrite _MetaData:Index
appropriately via this type of config:
props.conf:
[default]
TRANSFORMS-sendalltocorrectindex = sendalltocorrectindex
transforms.conf:
[sendalltocorrectindex]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = index1
(with heavyforwarder2 using different value for FORMAT to specify the valid index for that site/farm/whatever)
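For concreteness, the second heavy forwarder's transforms.conf might look like this (a sketch; "index2" is a hypothetical index name, substitute whatever index is valid on that farm):

```
# transforms.conf on heavyforwarder2 -- same stanza name, different FORMAT
# ("index2" is a placeholder for the index valid on the second farm)
[sendalltocorrectindex]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = index2
```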
The key is that the duplicate feeding needs to occur at the Universal Forwarder, specifically because UFs don't cook data. Once the data is cooked, it stays cooked, and additional indexers, heavy forwarders, etc. won't apply any further props/transforms to it (that's not 100% true, but the exceptions require ugly configuration).
Gorgeous solution, @micahkemp. It took us a couple of months to test it ;-)
The underscore difference between DEST_KEY = _MetaData:Index and DEST_KEY = MetaData:Host is interesting.
Very interesting @micahkemp!!
Maybe I'm completely missing the point, but let's say I have two apps on the UF, each monitoring the same data. In each app's inputs.conf we specify the specific index name, and in each app's outputs.conf we specify the indexers of the proper farm. Will that work?
You’re going to have issues monitoring the same file twice. Splunk will realize it’s already been indexed and skip it.
Gorgeous, @micahkemp! Is there a way to avoid that? Or is the only alternative "just" running two forwarders on the same server...
I’m not saying this is the only way, but it’s the only way I could think of.
Maybe another option would be to have a [default] props entry pointing to a transform that changes the index on the indexers. If you go this route, maybe do it on the old indexers and change the inputs on your forwarders to point to the new index. That way you leave the complexity on the indexers that will be retired.
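A sketch of that indexer-side configuration, assuming the forwarder inputs now point at the new index and the legacy indexers rewrite events into an index valid on the old farm (the stanza name and "legacy_index" are both hypothetical):

```
# props.conf on the legacy indexers
[default]
TRANSFORMS-rewriteindex = rewriteindex

# transforms.conf on the legacy indexers
# ("legacy_index" is a placeholder for an index that exists on the old farm)
[rewriteindex]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = legacy_index
```

Since this lives only on the indexers being retired, it disappears on its own when the old farm is decommissioned.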
Makes perfect sense @micahkemp. This should do it.
You could do all of the posted configuration on the indexers; I just kind of assumed you didn't want to, given the nature of your question. The configs should be helpful either way. The key is that the events need to be sent from a Universal Forwarder to two different Splunk instances, which is where the index change occurs.
Perfect - thank you.