Getting Data In

How to rename the index in data sent from another Splunk instance?

coreyf311
Path Finder

We are receiving data from an external Splunk instance. They have indexes A, B, C. When our indexers receive their data, it cannot be indexed because we only have indexes D, E, F. How can I rename the index on the incoming data? I am listening on splunktcp:9998, as all our in-house data is sent to the default splunktcp:9997. I have the configuration below in props and transforms on our HF, since this data passes through that box before hitting our indexing tier.

props.conf

[source::splunktcp:9998]
TRANSFORMS-index = override-index-theirindex

transforms.conf

[override-index-theirindex]
SOURCE_KEY = _MetaData:Index
REGEX = theirindex
FORMAT = myindex
DEST_KEY = _MetaData:Index
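
One thing I'm not sure about: as I understand it, props stanzas match on the event's source, sourcetype, or host, and events received over splunktcp keep the source and sourcetype the sending instance assigned, so [source::splunktcp:9998] (our input name) may never match anything. A rough sketch of what I think it would look like keyed on one of their sourcetypes instead (their_sourcetype is a placeholder for whatever they actually send):

props.conf (HF)

[their_sourcetype]
TRANSFORMS-index = override-index-theirindex

transforms.conf (HF)

[override-index-theirindex]
# rewrite the index metadata key from their index name to one of ours
SOURCE_KEY = _MetaData:Index
REGEX = theirindex
FORMAT = myindex
DEST_KEY = _MetaData:Index

Even with that, I gather none of it applies unless the cooked data actually goes back through the parsing pipeline.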
1 Solution

somesoni2
Revered Legend

Data coming to your Splunk instance is already parsed/cooked (assuming the other Splunk instance is running a full Splunk Enterprise version), hence it won't be parsed again at your heavy forwarder (transforms are not applied). The workaround is to force the cooked data to be parsed again. You can follow the configuration from the following post:
https://answers.splunk.com/answers/97918/reparsing-cooked-data-coming-from-a-heavy-forwarder-possibl...
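
If that link ever goes stale, the gist of it is to override the queue routing on the receiving splunktcp input so cooked events go back to the parsing queue instead of straight to the index queue. A rough sketch only; treat the route string below as an assumption, copy the actual default from your own system inputs.conf (it varies by version) and change only the indexQueue destination:

inputs.conf (on the receiving heavy forwarder)

[splunktcp://9998]
# the default route sends events that already have _linebreaker to indexQueue;
# pointing them at parsingQueue forces the cooked data to be re-parsed
route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:parsingQueue;absent_key:_linebreaker:parsingQueue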

Please note that with this setting, all data coming to your heavy forwarder will be parsed again (this can't be limited to a particular sourcetype). The parsing will include line breaking and timestamp extraction, so you'd need the same sourcetype definitions as the other Splunk instance so that event parsing is done correctly.
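
As an example of what I mean by sourcetype definitions: each sourcetype they send would need its parsing settings copied into your heavy forwarder's props.conf, alongside the index-rewrite transform. Something like this, with their_sourcetype and the line-breaking/timestamp values as placeholders for whatever the sending side actually uses:

props.conf (HF)

[their_sourcetype]
# line breaking and timestamp settings copied from the sending instance
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
# plus the index rewrite from the question
TRANSFORMS-index = override-index-theirindex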


coreyf311
Path Finder

I implemented the config using [source::splunktcp:port], per the config in the link. No joy. With my HF set to forward and not index a local copy, I get nothing on my indexing tier. With my HF set to store a copy and forward, I get messages on my HF about unknown indexes.
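
Next thing I plan to try, as a temporary test only: a catch-all transform that rewrites every incoming index, attached to all sources. If events still show up with unknown indexes after this, the data isn't going through the parsing pipeline on the HF at all, i.e. the route override isn't taking effect. Rough sketch (force-local-index and myindex are placeholders):

props.conf (HF, temporary test)

[source::...]
TRANSFORMS-testindex = force-local-index

transforms.conf (HF, temporary test)

[force-local-index]
# rewrite whatever index the sender assigned to a local one
SOURCE_KEY = _MetaData:Index
REGEX = .
FORMAT = myindex
DEST_KEY = _MetaData:Index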
