In my company there are smaller sub-companies (essentially) which each run their own smaller Splunk instances. At the top we have a larger Splunk instance that they are supposed to forward their data to as well, via the same UF (yes, double ingesting). The problem is that the index naming standards are strict at the top, and we cannot use their current index names, so they must be renamed.
I do not see any easy or intuitive way to accomplish this. Unfortunately, it can be like pulling teeth to get the smaller subs to do anything to make this easier (standing up their own HFs, for instance).
What would be the easiest way to ingest this data into both instances with differing index names? If there is a props/transforms solution, can it be done at the UF level, or is there something we could implement on our indexers even though the data has already been cooked by the UFs?
Thank you.
@splunkadunk5, I had a similar challenge; see Can the universal forwarder send the same data to different farms with different index names?
The solution provided by @micahkemp worked like a charm: the UF sent the data to both farms, and one of them modified the index name at the indexer level.
The data is not cooked at the UF level; it's cooked at the HF (heavy forwarder) or indexer level (whichever Splunk Enterprise instance the data encounters first is where event processing happens). I believe there are options available to fork data from the UF to multiple Splunk instances (Data cloning link here). You can do this in outputs.conf or at the data-input level (look for _TCP_ROUTING in inputs.conf).
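Something along these lines on the UF should clone the feed to both environments (the group names, hostnames, and ports below are just placeholders):

outputs.conf (clones every event to both target groups):

    [tcpout]
    defaultGroup = sub_indexers, corporate_indexers

    # placeholder host:port values
    [tcpout:sub_indexers]
    server = sub-idx1.example.com:9997

    [tcpout:corporate_indexers]
    server = corp-idx1.example.com:9997

Or, to clone only selected inputs, route per stanza in inputs.conf:

    # placeholder monitor path, index, and sourcetype
    [monitor:///var/log/app]
    index = sub_index
    sourcetype = app_logs
    _TCP_ROUTING = sub_indexers, corporate_indexers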
Once you've cloned the data from the UF to all the required instances, you can override the index the data goes to at the heavy forwarder/indexer level using props.conf/transforms.conf. See this for an example:
https://answers.splunk.com/answers/301504/how-to-override-sourcetype-and-index-assignment.html
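For reference, that override looks roughly like this on the receiving indexers or HF (the sourcetype and index name below are made up for illustration):

props.conf:

    # placeholder sourcetype
    [app_logs]
    TRANSFORMS-route_index = route_to_corp_index

transforms.conf:

    # rewrite the index metadata key for every event of that sourcetype
    [route_to_corp_index]
    REGEX = .
    DEST_KEY = _MetaData:Index
    FORMAT = corp_app_index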
I'm under the impression that if you have an index= assignment in inputs.conf at the UF level, it cooks the metadata before sending it off to the HF or indexers (which is why there is a sendCookedData option available). My test environment seems to confirm this, but I admit my testing on that has been relatively limited. We have also used a UF to forward to a QRadar deployment in the past, and this option was mandatory so that the metadata wasn't cooked on arrival.
It's going to be difficult to get them to change the index assignment on their side (or to make them set up HFs so the metadata isn't cooked beforehand), so I'm looking for another solution. Data cloning on its own will not work because the cloned stream will carry the same cooked index metadata.
I've considered using the sendCookedData option to send a non-cooked feed to our syslog server, and then on to our indexers with our own UF, but since it would all come in over TCP/514 it would be difficult to sort the different data feeds with syslog-ng.
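For what it's worth, that uncooked feed would look something like this in the UF's outputs.conf (hostname and port are placeholders, and I haven't tested this end to end in our scenario):

    # placeholder destination; sends raw data instead of Splunk's cooked S2S stream
    [tcpout:syslog_feed]
    server = syslog-ng.example.com:514
    sendCookedData = false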
Hope that makes sense. Thanks for the assistance.
You could still override metadata at the HF/indexer level when receiving from the UF. So basically, fork the cooked (but not parsed, so it will still be parsed at the next Splunk Enterprise instance) data from the UF to your larger and smaller instances (either fork everything or do it at the inputs.conf level), leave the original index assignment as-is (on the larger Splunk instance), and have the index assignment overridden by sourcetype or host at the smaller instances. Maybe try with a sample file and confirm that this architecture works.
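If the override ends up keyed on host rather than sourcetype, the same pattern applies; a rough sketch (the host pattern and index name here are hypothetical):

props.conf:

    # placeholder host pattern for one of the sub-companies
    [host::sub1-*]
    TRANSFORMS-sub1_index = sub1_to_corp

transforms.conf:

    [sub1_to_corp]
    REGEX = .
    DEST_KEY = _MetaData:Index
    FORMAT = corp_sub1_index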
I'll have to override it for the larger instance instead of the smaller ones, as the administrators of those environments will likely fight me every step of the way if I force them to change things on their side. But I'll see if I can override the cooked index at the HF level on my side; it might take me a while to test, though.
Thanks for your assistance.