Getting Data In

How to rename the index in data sent from another Splunk instance?

coreyf311
Path Finder

We are receiving data from an external Splunk instance. They have indexes A, B, and C. When our indexers receive their data, it cannot be indexed because we only have indexes D, E, and F. How can I rename the index for the incoming data? I am monitoring splunktcp:9998, as all our in-house data is sent to the default splunktcp:9997. I have the settings below in props and transforms on our HF, as this data passes through that box before hitting our indexing tier.

props.conf

[source::splunktcp:9998]
TRANSFORMS-index = override-index-theirindex

transforms.conf

[override-index-theirindex]
SOURCE_KEY = _MetaData:Index
REGEX = theirindex
FORMAT = myindex
DEST_KEY = _MetaData:Index
1 Solution

somesoni2
Revered Legend

Data coming to your Splunk instance is already parsed/cooked (assuming the other Splunk instance is running Splunk Enterprise), hence it will not be parsed again at your heavy forwarder and the transforms are not applied. The workaround is to force the heavy forwarder to parse the cooked data again. You can follow the configuration from the following post:
https://answers.splunk.com/answers/97918/reparsing-cooked-data-coming-from-a-heavy-forwarder-possibl...

Please note that with this setting, all data coming to your heavy forwarder will be parsed again (it can't be limited to a single sourcetype). The parsing will include line breaking and timestamp extraction, so you'd need the same sourcetype definitions as the other Splunk instance so that event parsing is done correctly.
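
For illustration, here is a minimal sketch of the kind of inputs.conf override that post describes, assuming the cooked data arrives on splunktcp port 9998 as in the question. The route value is an internal setting lifted from that workaround, so verify it against the default inputs.conf of your Splunk version before relying on it.

inputs.conf (heavy forwarder)

[splunktcp://9998]
# Route cooked events back through the parsing queue instead of straight to the
# index queue, so props/transforms (including the index override) run again.
route = has_key:_utf8:parsingQueue;has_key:_linebreaker:parsingQueue;absent_key:_utf8:parsingQueue;absent_key:_linebreaker:parsingQueue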

coreyf311
Path Finder

I implemented the config using [source::splunktcp:port] per the config in the link. No joy. With my HF set to forward and not index a copy, I get nothing on my indexing tier. With my HF set to store a copy and forward, I get messages on my HF about unknown indexes.
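
One thing worth checking (an assumption, not something confirmed in the thread): once cooked events are routed back through the parsing queue, props.conf is matched against each event's original host, source, and sourcetype as set by the sending instance, not against the name of the splunktcp input, so a [source::splunktcp:9998] stanza may never match. A hedged sketch keyed on a sourcetype the other instance actually sends ("their_sourcetype" below is a placeholder):

props.conf

# Hypothetical stanza: replace "their_sourcetype" with a sourcetype the sending instance uses.
[their_sourcetype]
TRANSFORMS-index = override-index-theirindex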
