I'm working on an environment with a mature clustered Splunk instance. The client wishes to start dual-forwarding to a new replacement environment, which is a separate legal entity (they understand the imperfections of dual-forwarding, the possibility of data loss, etc.).
They need to rename the destination indexes in the new environment, dropping a prefix we can call 'ABC'. I believe the easiest way to approach this is via INGEST_EVAL on the new indexers. There are approximately 20 indexes to rename. Example:
transforms.conf (located on the NEW indexers)
[index_remap_A]
INGEST_EVAL = index=replace(index, "^ABC", "")
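For completeness, a transforms.conf stanza does nothing on its own; it has to be referenced from props.conf on the same indexers. A minimal sketch of the wiring, assuming the events actually pass through the typing pipeline there (the stanza and class names are hypothetical; source::... is the catch-all source pattern):

props.conf (also on the NEW indexers)
[source::...]
TRANSFORMS-index_remap = index_remap_A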
I have read the transforms.conf spec file for 9.3.1 and a 2020 .conf presentation, but I am unable to find good examples. Has anyone taken this approach? As it is only a low volume of remaps, it may be best to approach this statically.
UF is sending the data as just cooked. HF is sending it as cooked and parsed.
The issue is not with INGEST_EVAL.
The thing is that on the indexer tier, as the data has already been processed by the HF, no props are fired. The events are routed straight to the indexing pipeline, completely skipping the preceding steps.
You could try to fiddle with the routing on the splunktcp input so that it gets routed to typingQueue but that can break other things.
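A sketch of that queue-routing tweak, based on the default route string documented in inputs.conf.spec. The only change is that events which already carry the _linebreaker key (i.e. data parsed by the HF) are sent to typingQueue instead of indexQueue, so index-time props/transforms get a chance to fire. This is an assumption to verify in a lab, not a recommendation; it affects all traffic on that input and can break replication and other internal routing:

inputs.conf (on the NEW indexers, port number is illustrative)
[splunktcp://9997]
route = has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:typingQueue;absent_key:_linebreaker:parsingQueue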
There are two possible issues.
1. Are you forwarding to two destinations from the originating UF or from an intermediate HF? In the latter case the data is forwarded as parsed, so it's not processed again. (That can AFAIR be changed, but it's tricky.)
2. Since props are based on sourcetype/source/host, you can't just rewrite one index to another globally. You need to do it selectively, for example on a per-sourcetype basis (possibly with some conditional execution), or define wildcard-based global stanzas to conditionally rewrite destination indexes. Kinda ugly and might be troublesome to maintain.
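A per-sourcetype sketch of option 2 (the sourcetype and stanza names here are hypothetical). The if() guard makes the rewrite a no-op for any event whose index doesn't carry the prefix, so the transform can be attached to sourcetypes that feed a mix of indexes:

props.conf
[abc:firewall]
TRANSFORMS-drop_abc_prefix = drop_abc_prefix

transforms.conf
[drop_abc_prefix]
INGEST_EVAL = index=if(match(index, "^ABC"), replace(index, "^ABC", ""), index)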
Thanks @PickleRick .
HF: There is an HF in the way, so yes, it is cooking the data. Hence my intention to perform an INGEST_EVAL on the indexer tier of the new instance to remap that meta field at the point of indexing. I understood this to be viable and a useful workaround for the cooked-data issue.
If this is viable then it minimises changes to the forwarder tier, which is desirable for stability. This was one of the sources recommended to me:
UF is sending the data as just cooked. HF is sending it as cooked and parsed.
The issue is not with INGEST_EVAL.
The thing is that on the indexer tier, as the data has already been processed by the HF, no props are fired. The events are routed straight to the indexing pipeline, completely skipping the preceding steps.
You could try to fiddle with the routing on the splunktcp input so that it gets routed to typingQueue but that can break other things.
Thanks. As I suspected, this strikes me as fraught with challenges and impossible to fully replicate in a lab, which increases the risk of an outage or lost data during the dual-forward. I think we will sadly need to keep the existing index names to ease migration; it isn't wrong, just not ideal for a clean new environment.