All,
I have a data set that I need in indexclusterA as index=distil. However, I need that same data in indexclusterB as index=web. All of the data flows through a heavy forwarder.
Any idea how I would do this?
As an alternative to changing the index on the receiving HF/indexer, you may try sourcetype cloning. The caveat is that the sourcetype will end up different on each cluster (although you could put additional config there to change it back; see the sketch after the example below).
On the HF:

props.conf

# Clone every event of the original sourcetype
[original_sourcetype]
TRANSFORMS-clone = clone_sourcetype

# The cloned events come back through props as sourcetype2,
# where their index is rewritten
[sourcetype2]
TRANSFORMS-change_index = change_index

transforms.conf

# Copy each event into a new event with sourcetype sourcetype2
[clone_sourcetype]
CLONE_SOURCETYPE = sourcetype2
REGEX = .

# Rewrite the index of the cloned events to web
[change_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = web
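To act on the caveat above, you could rename the cloned sourcetype back on indexclusterB's side. This is only a sketch, and it assumes the data is re-parsed there (see the queue = parsingQueue discussion below; cooked data from a HF normally bypasses indexer-side props and transforms):

props.conf (on indexclusterB)

[sourcetype2]
TRANSFORMS-rename_st = rename_sourcetype

transforms.conf (on indexclusterB)

# Change the sourcetype back to the original value
[rename_sourcetype]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::original_sourcetype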
A very similar thread - Can the universal forwarder send the same data to different farms with different index names?
On the heavy forwarder, you would use multiple [tcpout:<target_group>] stanzas in outputs.conf, one for indexclusterA and the other for indexclusterB. The heavy forwarder should also have an inputs.conf file with the input's index set to distil.
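A minimal sketch of that HF config, with hypothetical server addresses, monitor path, and sourcetype (substitute your own indexers, or indexer discovery if you use it):

outputs.conf

[tcpout]
# Listing both groups clones the data stream to both clusters
defaultGroup = indexclusterA, indexclusterB

[tcpout:indexclusterA]
server = a-idx1.example.com:9997, a-idx2.example.com:9997

[tcpout:indexclusterB]
server = b-idx1.example.com:9997, b-idx2.example.com:9997

inputs.conf

[monitor:///var/log/myapp]
index = distil
sourcetype = your_sourcetype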
On the 2nd index cluster, you have an inputs.conf with a Splunk TCP input, [splunktcp://9997]. Under this stanza, add the line queue = parsingQueue. This will ensure that props and transforms on the index cluster will apply.
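Concretely, on indexclusterB's peers (port 9997 assumed):

inputs.conf

[splunktcp://9997]
# Force the already-cooked data back through the parsing pipeline
# so that indexer-side props and transforms run again
queue = parsingQueue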
Then, in props.conf, have the following stanza.
# A source::, host::, or sourcetype stanza, matching the events to re-index
[whatever source/sourcetype/host you want to change]
TRANSFORMS-changeindex = changeindex

And transforms.conf

# Rewrite the index to web for all matching events
[changeindex]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = web
You could also change the splunktcp line to only match the IP address of the heavy forwarder, if you have multiple forwarders sending to the indexers.
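For example, assuming the HF's address is 10.0.0.5 (hypothetical), and assuming you keep a general [splunktcp://9997] stanza for the other forwarders:

inputs.conf

# Only connections from the HF go back through the parsing queue
[splunktcp://10.0.0.5:9997]
queue = parsingQueue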
Is there no way to handle this at the heavy forwarder level? I believe enabling "queue = parsingQueue" reprocesses cooked data; the result would be that I'd have to move over dozens of apps and reprocess the data, and wouldn't that put a huge load on my indexers?