We created two indexes on our indexer cluster. Now we need to send the same data to both of them (the unmodified data to the first one and a transformed copy to the other), from one source, from one universal forwarder. How can we implement this? On which host: universal forwarder, heavy forwarder, or indexer?
Hi @m_efremov,
As far as I know you can't clone data to 2 indexes on the same indexer cluster with data flowing directly from UF -> Indexer, but there is an ugly way to achieve this, given below. Note that it will double your license usage for that source.
Here I am assuming that you are currently sending data directly from the Universal Forwarder to the Indexer Cluster, and that a Heavy Forwarder is also sending data to the same Indexer Cluster.
With the approach below, the data flow will be:
UF -> Indexer Cluster (Index = ABC)
Heavy Forwarder -> Indexer Cluster (Index = XYZ)
inputs.conf (on the Universal Forwarder)
[monitor:///tmp/]
_TCP_ROUTING = indexers, heavyforwarder
whitelist = mycustom\.log
index = ABC
sourcetype = mysourcetype
outputs.conf (on the Universal Forwarder)
[tcpout]
defaultGroup = indexers
[tcpout:indexers]
server = indexer1:port, indexer2:port
[tcpout:heavyforwarder]
server = hfw:port
props.conf (on the Heavy Forwarder)
[mysourcetype]
TRANSFORMS-rouindex = routing_to_index
transforms.conf (on the Heavy Forwarder)
[routing_to_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = XYZ
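Once both paths are in place and the forwarders have been restarted, a quick sanity check (using the index and sourcetype names from the example above) is a search like:

```
index=ABC OR index=XYZ sourcetype=mysourcetype
| stats count by index
```

Roughly equal counts for both indexes indicate that both copies are arriving.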
Thank you, @harsmarvania57, it seems to be a workable solution. My transforms.conf also contains "CLONE_SOURCETYPE", but all the other options are the same.
[routing_to_new_index]
REGEX = .
CLONE_SOURCETYPE = my_new_sourcetype
FORMAT = my_new_index
DEST_KEY = _MetaData:Index
I have converted my comment to an answer; if it really helps you, you can accept it. Can I ask why you want CLONE_SOURCETYPE?
I use CLONE_SOURCETYPE to assign a different sourcetype name (not only a different index) to my new data flow. This is because I want to apply different transformations to the old and new data (maybe on the indexer side, in their props.conf and transforms.conf). I also want to collect separate statistics about the old and new sourcetypes (one of them has transformed events).
For renaming the sourcetype and routing the data to another index, can you please try the configuration below on the Heavy Forwarder?
props.conf
[mysourcetype]
TRANSFORMS-rouindex = rename_sourcetype, routing_to_new_index
transforms.conf
[rename_sourcetype]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::new_sourcetype
[routing_to_new_index]
SOURCE_KEY = MetaData:Sourcetype
DEST_KEY = _MetaData:Index
REGEX = new_sourcetype
FORMAT = XYZ
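If this works, the renamed events should land in the new index. A quick check (names taken from the example above) might be:

```
index=XYZ | stats count by sourcetype
```

All events in XYZ should show up under new_sourcetype.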
What I understand from your question is that you want to send a single log file to two indexes.
From @woodcock's answer on this post:
https://answers.splunk.com/answers/567223/how-to-send-same-data-source-to-two-or-multiple-in-1.html
[monitor://D:\test\test1.log]
sourcetype = test
index = index1
[monitor://D:\linktotest\test1.log]
sourcetype = test
index = index2
Then create a symbolic link from linktotest to test.
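The link step might look like the following sketch (paths are illustrative; the thread's example uses Windows paths, where the equivalent would be `mklink /D D:\linktotest D:\test`):

```shell
# Linux/Unix sketch: mirror the monitored directory via a symlink so the
# same log file matches two monitor stanzas (and so two index settings).
mkdir -p /tmp/test                 # the real monitored directory
ln -sfn /tmp/test /tmp/linktotest  # second path to the same files
```

As noted below, this only works where you control the host and the filesystem supports links.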
We can't do that on most of our application servers. Some of them are not under our control, some of them run under MS Windows, etc. Thank you for the answer, but it is not a general solution.