Event-based index routing at indexer layer when heavy forwarder is involved

ehowardl3
Path Finder

I'm currently sending logs from a universal forwarder (UF) through a heavy forwarder (HF) to two indexer clusters.

I need to set the index name at the indexing layer, since the index name will differ depending on the indexer cluster.

I tried putting the following props and transforms at the indexing layer:

props.conf:

[my:sourcetype]
TRANSFORMS-route_to_new_index = set_new_index

transforms.conf:

[set_new_index]
SOURCE_KEY = MetaData:Sourcetype
DEST_KEY = _MetaData:Index
REGEX = (sourcetype::my:sourcetype)
FORMAT = new_index

This does not change the index. However, placing these same props and transforms on the HF does change it. That doesn't help me, though, since the index name needs to be set after the data is split off to each indexer cluster. Is it really not possible to do this at the indexer layer when an HF is involved? Any other suggestions on how to accomplish the index rename?

Thanks

1 Solution

MuS
Legend

Hi ehowardl3,

first you need to have two separate tcpout entries in outputs.conf of your HWF:

[tcpout]
defaultGroup = cluster1 

[tcpout:cluster1]
server = <ip address>:<port>

[tcpout:cluster2]
server = <ip address>:<port>

Next you need a props.conf like the one you had, with an additional transform in the list:

[my:sourcetype]
TRANSFORMS-001-route_to_new_index_cluster2 = 001-route_to_new_index_cluster2, 002-route_to_cluster2

then in transforms.conf you set it up like this:

[001-route_to_new_index_cluster2]
DEST_KEY = _MetaData:Index
REGEX = .
# set FORMAT to whatever index name you want
FORMAT = my_new_index

[002-route_to_cluster2]
REGEX = .
DEST_KEY = _TCP_ROUTING
# "cluster2" is the target group name from outputs.conf
FORMAT = cluster2

(Note: # comments in .conf files have to be on their own line; placed after a value they become part of the value.)

The option defaultGroup=cluster1 in outputs.conf will send all data unchanged to cluster1.

After applying this config you need to restart the HWF.
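
To double-check that the restart picked everything up, you can list the merged config with btool; a quick sanity check, assuming a default $SPLUNK_HOME:

# show the effective props for the sourcetype, and which file each setting comes from
$SPLUNK_HOME/bin/splunk btool props list my:sourcetype --debug
# show the effective transforms stanzas
$SPLUNK_HOME/bin/splunk btool transforms list --debug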

Hope this helps ...

cheers, MuS

gjanders
SplunkTrust

Refer to MuS's feedback, but to answer "Is it really not possible to do this at the indexer layer when a HF is involved?": not under normal circumstances. Once data is parsed, it is not re-parsed at the next tier, so index-time transforms do not apply a second time.

While there are advanced tricks to "recook" the data, they would not make sense for your scenario.

0 Karma

ehowardl3
Path Finder

@gjanders - thanks for the info!

0 Karma

ehowardl3
Path Finder

Thank you! This makes sense and should work for me. So if there are two different source types that I need to send to two different indexes, I'm assuming the props and transforms would look something like this:

transforms.conf:

[001-route_to_new_index_cluster2]
SOURCE_KEY = MetaData:Sourcetype
DEST_KEY = _MetaData:Index
REGEX = (sourcetype::my:sourcetype1)
FORMAT = my_new_index1

[002-route_to_new_index_cluster2]
SOURCE_KEY = MetaData:Sourcetype
DEST_KEY = _MetaData:Index
REGEX = (sourcetype::my:sourcetype2)
FORMAT = my_new_index2

[003-route_to_cluster2]
REGEX = .
DEST_KEY = _TCP_ROUTING
# "cluster2" is the target group name from outputs.conf
FORMAT = cluster2

props.conf:

[my:sourcetype1]
TRANSFORMS-001-route_to_new_index_cluster2 = 001-route_to_new_index_cluster2, 003-route_to_cluster2

[my:sourcetype2]
TRANSFORMS-002-route_to_new_index_cluster2 = 002-route_to_new_index_cluster2, 003-route_to_cluster2

Correct?

Thanks!

0 Karma

ehowardl3
Path Finder

@MuS, one thing that has me a little confused: since props.conf calls out the sourcetype and then routes it to cluster2, won't that catch all the data of that sourcetype, rather than splitting it between cluster2 and the default group?

0 Karma

MuS
Legend

That's where the defaultGroup = cluster1 in outputs.conf kicks in: it will send ANY data to that target.

But you can also remove the defaultGroup = cluster1 and do something like this:

props.conf

[my:sourcetype1]
TRANSFORMS-000-route_to_cluster = 000-route_to_cluster1
TRANSFORMS-001-route_to_new_index_cluster2 = 001-route_to_new_index_cluster2, 003-route_to_cluster2

[my:sourcetype2]
TRANSFORMS-000-route_to_cluster = 000-route_to_cluster1
TRANSFORMS-002-route_to_new_index_cluster2 = 002-route_to_new_index_cluster2, 003-route_to_cluster2

transforms.conf

[001-route_to_new_index_cluster2]
DEST_KEY = _MetaData:Index
REGEX = .
FORMAT = my_new_index1

[002-route_to_new_index_cluster2]
DEST_KEY = _MetaData:Index
REGEX = .
FORMAT = my_new_index2

[000-route_to_cluster1]
REGEX = .
DEST_KEY = _TCP_ROUTING
# "cluster1" is the target group name from outputs.conf
FORMAT = cluster1

[003-route_to_cluster2]
REGEX = .
DEST_KEY = _TCP_ROUTING
# "cluster2" is the target group name from outputs.conf
FORMAT = cluster2
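
Side note: if a sourcetype ever needs to go to both clusters instead of being split, _TCP_ROUTING also accepts a comma-separated list of target groups. A sketch (the stanza name here is just an example):

[004-route_to_both_clusters]
REGEX = .
DEST_KEY = _TCP_ROUTING
# comma-separated list of target group names from outputs.conf
FORMAT = cluster1,cluster2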

Hope that makes sense ...

cheers, MuS

0 Karma

ehowardl3
Path Finder

Perfect. Thanks for the clarification.

0 Karma

MuS
Legend

Except for the REGEX: just use . because the props.conf already limits the use of the transforms to specific sourcetypes 😉
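
In other words, your first transform would shrink to something like this (same idea for the second one):

[001-route_to_new_index_cluster2]
# props.conf already scopes this to my:sourcetype1, so match everything
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = my_new_index1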

0 Karma

ehowardl3
Path Finder

Ah yeah, good point. Thanks again!

0 Karma

MuS
Legend

yep, that should work

0 Karma

MuS
Legend

@ehowardl3 is the data split/cloned to both idx clusters or load balanced?

0 Karma

ehowardl3
Path Finder

@MuS, thanks for your time. The data is split/cloned to both idx clusters.

0 Karma

MuS
Legend

please hold, this can be done on the HWF 😉

0 Karma