To answer your question, our data comes from three different sources:

- Universal Forwarders (Splunk2Splunk)
- HEC
- Modular Inputs

Data integrity is critical for us, so your point about catching the data "on the way in" really resonates. That's exactly why we are now leaning towards Ingest Processor with an SPL2 `branch` pipeline rather than Ingest Actions + S3: the asynchronous nature of S3 introduces a layer we'd rather avoid when data loss is not an option.

Given our mixed ingestion sources (UF, HEC, and Modular Inputs), do you know if Ingest Processor handles all three transparently, or are there any known limitations depending on the input type?

Also, for the second Splunk Cloud destination, we were planning to use a single HEC token with all target indexes whitelisted in its `indexes` field, and let the pipeline handle routing via `eval index=...`. Does that approach sound right to you?

Thanks again for pointing us in the right direction!
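For context, this is roughly the kind of pipeline we have in mind — a sketch from memory, so the exact `branch` syntax is worth double-checking against the SPL2 docs, and the sourcetype condition, index names, and destination names are just illustrative placeholders, not our actual config:

```spl2
/* Route security-related events to one index, everything else to another.
   "example:security", "security_idx", "ops_idx", $destination, and
   $destination2 are hypothetical names for illustration only. */
$pipeline = | from $source
    | branch
        [where sourcetype == "example:security"
            | eval index = "security_idx"
            | into $destination],
        [where sourcetype != "example:security"
            | eval index = "ops_idx"
            | into $destination2];
```

The idea being that the pipeline sets `index` per event, and the single HEC token on the receiving side just needs all of those target indexes allowed.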