Getting Data In

How to rename the index in data sent from another Splunk instance?

coreyf311
Path Finder

We are receiving data from an external Splunk instance. They have indexes A, B, C. When our indexers receive their data it cannot be indexed, because we have indexes D, E, F. How can I rename the index on the incoming data? I am monitoring splunktcp:9998, as all our in-house data is sent to the default splunktcp:9997. I have the below in props and transforms on our HF, as this data passes through that box before hitting our indexing tier.

Props

[source::splunktcp:9998]
TRANSFORMS-index = override-index-theirindex

transforms

[override-index-theirindex]
SOURCE_KEY = _MetaData:Index
REGEX = theirindex
FORMAT = myindex
DEST_KEY = _MetaData:Index
1 Solution

somesoni2
Revered Legend

Data coming to your Splunk instance is already parsed/cooked (assuming the other Splunk instance is running full Splunk Enterprise, not a universal forwarder), hence it won't be parsed again at your heavy forwarder and your transforms are not applied. The workaround is to force the cooked data to be parsed again. You can follow the configuration from the following post:
https://answers.splunk.com/answers/97918/reparsing-cooked-data-coming-from-a-heavy-forwarder-possibl...

Please note that with this setting, all data coming to your heavy forwarder will be parsed again (it can't be limited to a single sourcetype). The parsing includes line breaking and timestamp extraction, so you'd need the same sourcetype definitions as the other Splunk instance so that event parsing is done correctly.
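For reference, the workaround in that post amounts to re-routing cooked events back into the parsing queue on the receiving side. A minimal sketch of the receiver's inputs.conf, assuming the default route string documented in inputs.conf.spec (verify the exact default for your Splunk version before copying; the only intended change is sending events with _linebreaker to parsingQueue instead of indexQueue):

inputs.conf (on the instance receiving port 9998)

[splunktcp://9998]
route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:parsingQueue;absent_key:_linebreaker:parsingQueue

With this in place the cooked events go through the parsing pipeline again, so index-time props/transforms such as the override-index-theirindex transform above get a chance to run.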



coreyf311
Path Finder

I implemented the config using [source::splunktcp:port] per the link. No joy. With my HF set to forward and not index a copy, I get nothing on my indexing tier. With my HF set to store a copy and forward, I get messages on my HF about unknown indexes.
