Getting Data In

How to rename the index in data sent from another Splunk instance?

coreyf311
Path Finder

We are receiving data from an external Splunk instance. They have indexes A, B, C. When our indexers receive their data, it cannot be indexed because we only have indexes D, E, F. How can I rename the index on the incoming data? I am listening on splunktcp:9998, as all our in-house data is sent to the default splunktcp:9997. I have the below in props and transforms on our HF, as this data passes through that box before hitting our indexing tier.

props.conf

[source::splunktcp:9998]
TRANSFORMS-index = override-index-theirindex

transforms.conf

[override-index-theirindex]
SOURCE_KEY = _MetaData:Index
REGEX = theirindex
FORMAT = myindex
DEST_KEY = _MetaData:Index
1 Solution

somesoni2
Revered Legend

Data coming to your Splunk instance is already parsed/cooked (assuming the other Splunk instance is running Splunk Enterprise), so it won't be parsed again at your heavy forwarder and your transforms are not applied. The workaround is to force the cooked data to be parsed again. You can follow the configuration from this post:
https://answers.splunk.com/answers/97918/reparsing-cooked-data-coming-from-a-heavy-forwarder-possibl...

Please note that with this setting, all data coming to your heavy forwarder will be parsed again (it can't be limited to a single sourcetype). The parsing will include line breaking and timestamp extraction, so you'd need the same sourcetype definitions as the other Splunk instance so that event parsing is done correctly.
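For reference, the approach in the linked post works by overriding the default route on the splunktcp input so already-cooked events are sent back through the parsing queue instead of straight to the index queue. A sketch of the inputs.conf stanza on the heavy forwarder (the queue names follow Splunk's documented defaults; verify the default route value against your Splunk version before changing it):

# inputs.conf on the heavy forwarder (sketch, based on the linked post)
[splunktcp:9998]
# The default route sends cooked events (those with a _linebreaker key)
# directly to indexQueue, skipping parsing. Pointing that rule at
# parsingQueue forces the events to be parsed again, which lets
# index-time transforms such as the index override run.
route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:parsingQueue;absent_key:_linebreaker:parsingQueue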


coreyf311
Path Finder

I implemented the config using [source::splunktcp:port] per the link. No joy. With my HF set to forward and not index a copy, I get nothing on my indexing tier. With my HF set to store a copy and forward, I get messages on my HF about unknown indexes.
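One thing worth checking here (an assumption, not something confirmed in the thread): for forwarded data, the source metadata on each event is the original source from the sending instance (e.g. a file path), not splunktcp:9998, so a [source::splunktcp:9998] props stanza may never match. Keying the props stanza by one of the remote instance's sourcetypes may be more reliable; in this sketch, "their_sourcetype" is a placeholder for an actual sourcetype they send:

# props.conf (sketch; "their_sourcetype" is a placeholder)
[their_sourcetype]
TRANSFORMS-index = override-index-theirindex

# transforms.conf
[override-index-theirindex]
SOURCE_KEY = _MetaData:Index
# Anchor the match so only the exact remote index name is rewritten
REGEX = ^theirindex$
FORMAT = myindex
DEST_KEY = _MetaData:Index

A stanza like this would need to exist wherever parsing actually happens, so with the reparsing workaround in place it belongs on the heavy forwarder (and on the indexers if they also parse this data).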
