I'm currently sending logs from a UF > HF > two indexer clusters.
I need to set the index name at the indexing layer, since the index name differs depending on the indexer cluster.
I tried putting the following props and transforms at the indexing layer:
props.conf:
[my:sourcetype]
TRANSFORMS-route_to_new_index = set_new_index
transforms.conf:
[set_new_index]
SOURCE_KEY = MetaData:Sourcetype
DEST_KEY = _MetaData:Index
REGEX = (sourcetype::my:sourcetype)
FORMAT = new_index
This does not change the index; however, placing the same props and transforms on the HF does change the index. That doesn't help me, though, since the index name needs to be set after the data is split off to each indexer cluster. Is it really not possible to do this at the indexer layer when an HF is involved? Any other suggestions on how to accomplish the index rename?
Thanks
Hi ehowardl3,
First you need two separate tcpout entries in the outputs.conf of your HWF:
[tcpout]
defaultGroup = cluster1
[tcpout:cluster1]
server = <ip address>:<port>
[tcpout:cluster2]
server = <ip address>:<port>
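(If each target is an indexer cluster with several peers, the server setting can take a comma-separated list and the forwarder will automatically load balance across them. A sketch with placeholder addresses:
[tcpout:cluster1]
server = <peer1>:<port>, <peer2>:<port>
Indexer discovery via the cluster manager is another option, but a plain server list is enough for this example.)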
Next you need a props.conf like the one you had, with an additional line:
[my:sourcetype]
TRANSFORMS-001-route_to_new_index_cluster2 = 001-route_to_new_index_cluster2, 002-route_to_cluster2
Then in transforms.conf you set it up like this:
[001-route_to_new_index_cluster2]
DEST_KEY = _MetaData:Index
REGEX = .
# set FORMAT to whatever index name you want
FORMAT = my_new_index
[002-route_to_cluster2]
REGEX = .
DEST_KEY = _TCP_ROUTING
# cluster2 is the tcpout group name from outputs.conf
FORMAT = cluster2
The option defaultGroup = cluster1 in outputs.conf will send all data unchanged to cluster1.
After applying this config you need to restart the HWF.
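One way to sanity-check that the HWF picked up the settings after the restart (a sketch; btool only shows the merged config on disk, it does not prove the transform fired):
$SPLUNK_HOME/bin/splunk btool props list my:sourcetype --debug
$SPLUNK_HOME/bin/splunk btool transforms list 001-route_to_new_index_cluster2 --debug
Then search cluster2 for index=my_new_index sourcetype=my:sourcetype to confirm events are landing in the new index.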
Hope this helps ...
cheers, MuS
Refer to MuS's feedback, but to answer "Is it really not possible to do this at the indexer layer when an HF is involved?": not under normal circumstances. Once data is parsed it is not re-parsed at the next tier, so index-time transforms do not apply twice.
While there are advanced tricks to "recook" the data, they would not make sense for your scenario.
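To illustrate why the indexer-side config is a no-op here (a sketch, assuming a standard splunktcp receiving port of 9997): data arriving from a heavy forwarder is already cooked/parsed, so the parsing pipeline, and with it the index-time TRANSFORMS, is skipped on the indexers.
inputs.conf on the indexers:
[splunktcp://9997]
disabled = 0
# events from the HF arrive on this input already parsed,
# so index-time props/transforms rules on the indexers are not applied to them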
@gjanders - thanks for the info!
Thank you! This makes sense and should work for me. So if there are two different source types that I need to send to two different indexes, I'm assuming the props and transforms would look something like this:
transforms.conf:
[001-route_to_new_index_cluster2]
SOURCE_KEY = MetaData:Sourcetype
DEST_KEY = _MetaData:Index
REGEX = (sourcetype::my:sourcetype1)
FORMAT = my_new_index1
[002-route_to_new_index_cluster2]
SOURCE_KEY = MetaData:Sourcetype
DEST_KEY = _MetaData:Index
REGEX = (sourcetype::my:sourcetype2)
FORMAT = my_new_index2
[003-route_to_cluster2]
REGEX = .
DEST_KEY = _TCP_ROUTING
# cluster2 is the tcpout group name from outputs.conf
FORMAT = cluster2
props.conf:
[my:sourcetype1]
TRANSFORMS-001-route_to_new_index_cluster2 = 001-route_to_new_index_cluster2, 003-route_to_cluster2
[my:sourcetype2]
TRANSFORMS-002-route_to_new_index_cluster2 = 002-route_to_new_index_cluster2, 003-route_to_cluster2
Correct?
Thanks!
@MuS, one thing has me a little confused: since props.conf calls out the sourcetype and then routes it to cluster2, won't that catch all the data of that sourcetype, instead of sending it to both cluster2 and the default group?
That's where defaultGroup = cluster1 in outputs.conf kicks in: it will send ANY data to that target. But you can also remove defaultGroup = cluster1 and do something like this:
props.conf
[my:sourcetype1]
TRANSFORMS-000-route_to_cluster = 000-route_to_cluster1
TRANSFORMS-001-route_to_new_index_cluster2 = 001-route_to_new_index_cluster2, 003-route_to_cluster2
[my:sourcetype2]
TRANSFORMS-000-route_to_cluster = 000-route_to_cluster1
TRANSFORMS-001-route_to_new_index_cluster2 = 002-route_to_new_index_cluster2, 003-route_to_cluster2
transforms.conf
[001-route_to_new_index_cluster2]
DEST_KEY = _MetaData:Index
REGEX = .
FORMAT = my_new_index1
[002-route_to_new_index_cluster2]
DEST_KEY = _MetaData:Index
REGEX = .
FORMAT = my_new_index2
[000-route_to_cluster1]
REGEX = .
DEST_KEY = _TCP_ROUTING
# cluster1 is the tcpout group name from outputs.conf
FORMAT = cluster1
[003-route_to_cluster2]
REGEX = .
DEST_KEY = _TCP_ROUTING
# cluster2 is the tcpout group name from outputs.conf
FORMAT = cluster2
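(Side note, in case the matched sourcetypes should reach both clusters instead of being redirected to cluster2 only: the FORMAT of a _TCP_ROUTING transform can list more than one tcpout group, comma-separated. A sketch using the group names above, with a made-up stanza name:
[003-route_to_both_clusters]
REGEX = .
DEST_KEY = _TCP_ROUTING
# listing both groups sends the matched events to both clusters
FORMAT = cluster1,cluster2
Adjust the stanza name and the props.conf reference accordingly.)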
Hope that makes sense ...
cheers, MuS
Perfect. Thanks for the clarification.
Except for the REGEX: just use REGEX = . , because the props.conf already limits the transforms to specific sourcetypes 😉
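A minimal sketch of that simplification, keeping the stanza names from above (SOURCE_KEY is dropped because props.conf already selects the sourcetype):
[001-route_to_new_index_cluster2]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = my_new_index1
[002-route_to_new_index_cluster2]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = my_new_index2
[003-route_to_cluster2]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = cluster2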
Ah yeah, good point. Thanks again!
yep, that should work
@ehowardl3, is the data split/cloned to both idx clusters, or load balanced?
@MuS, thanks for your time. The data is split/cloned to both idx clusters.
please hold, this can be done on the HWF 😉
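For context, "split/cloned to both clusters" on a forwarder is usually done in outputs.conf by listing both tcpout groups as targets, which sends each group a full copy of the data. A sketch using the group names above (this illustrates the cloning setup, not the routing fix discussed earlier in the thread):
[tcpout]
# two target groups = data cloning: each group receives every event
defaultGroup = cluster1, cluster2
[tcpout:cluster1]
server = <ip address>:<port>
[tcpout:cluster2]
server = <ip address>:<port>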