Hello all,
Forgive my hasty question, it's late and my articulation has dwindled along with my brain capacity...
We need a solution that guarantees we don't lose any events whatsoever on a collector, i.e. a heavy forwarder.
We can't enable persistent queues across all inputs, because splunktcp-ssl inputs don't support them. We want all inputs to go straight into Splunk, no syslog relays etc.
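For context, this is the sort of persistent-queue setup I mean; it works for plain tcp/tcp-ssl inputs but, as far as I can tell, is simply not honoured for splunktcp-ssl (the ports below are just examples):

    # inputs.conf on the heavy forwarder
    [tcp-ssl:5140]
    queueSize = 10MB
    persistentQueueSize = 10GB

    [splunktcp-ssl:9997]
    # no persistentQueueSize here -- splunktcp inputs don't support persistent queues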
The index-and-forward function won't work for us, as the locally indexed copy counts toward licence usage. And how would one check which events were not acknowledged anyway? I assume something would have to be hacked into place to verify which events were actually received?
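If I understand it correctly, the closest built-in thing is indexer acknowledgement (useACK) on the output: the forwarder holds events in a wait queue until the indexer confirms them and re-sends anything that never gets ack'ed, so I shouldn't need to hack anything in just to track receipts. A sketch, with example host names and ports:

    # outputs.conf on the heavy forwarder
    [tcpout]
    defaultGroup = remote_indexers
    # indexAndForward = true   <- the option we ruled out; the local copy counts against the licence

    [tcpout:remote_indexers]
    server = idx1.example.com:9997   # full receiver list would go here
    useACK = true
    maxQueueSize = 100MB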
I thought of writing something that monitors the uplink and, when it goes down, stops splunkd, copies in an alternate outputs.conf (with no forward servers configured) and starts Splunk again so it indexes locally, then reverses the swap when the uplink is back.
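A very rough sketch of that watchdog idea, with hypothetical paths, a hypothetical uplink check via nc, and no testing behind it:

    #!/bin/sh
    # Swap outputs.conf depending on whether the uplink to the indexers is reachable.
    SPLUNK_HOME=/opt/splunk
    LOCAL=$SPLUNK_HOME/etc/system/local

    while true; do
        if nc -z -w 5 idx1.example.com 9997; then
            WANTED=outputs.conf.forward      # uplink up: forward to the remote indexers
        else
            WANTED=outputs.conf.localonly    # uplink down: no forward servers, index locally
        fi
        # Only restart Splunk when the active config actually needs to change
        if ! cmp -s "$LOCAL/$WANTED" "$LOCAL/outputs.conf"; then
            "$SPLUNK_HOME/bin/splunk" stop
            cp "$LOCAL/$WANTED" "$LOCAL/outputs.conf"
            "$SPLUNK_HOME/bin/splunk" start
        fi
        sleep 60
    done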
I have noticed that if all forward servers are removed from outputs.conf, either at startup or one at a time via the CLI, Splunk automatically starts indexing locally on the fly.
This is ideal, since it happens on the fly and, I presume, with no event loss? It's the closest solution I could find, except that adding the forward servers back one at a time caused our data to be cloned in triplicate. Ouch!
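If I understand the cloning behaviour correctly, the triplicate copies appear when each indexer ends up in its own [tcpout:<group>] stanza, because every target group receives a full copy of the data; listing all three receivers inside a single target group should load-balance across them instead. Roughly (names and ports are examples):

    # outputs.conf -- one target group, three servers: load balancing, not cloning
    [tcpout]
    defaultGroup = remote_indexers

    [tcpout:remote_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997

    # Three separate [tcpout:<group>] stanzas all named in defaultGroup would
    # instead send a full copy of the data to each group (cloning).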
We don't want to do cloning, hell no. We assume one uplink in each scenario.
We have three receivers (indexers) on the remote side but only one uplink.
I'm lost: how can we get Splunk to index locally ONLY when the uplink is unavailable (and the events are therefore not ack'ed), and then merge those buckets/events upstream out of band at a later stage?
It would be perfect if, after a timeout, all un-ack'ed events could somehow be put into an index on the localhost, and then, once back online, we could forward those same events, get them ack'ed and clean out the local index.
That last part I know how to do: force a roll of the last hot bucket, scrub the bucket IDs, scp the warm buckets upstream to an indexer, merge and restart.
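For completeness, the hot-bucket roll I have in mind can be triggered through the REST API, if I remember the endpoint right (the index name here is just an example):

    # Ask splunkd to roll the hot buckets of a given index to warm
    /opt/splunk/bin/splunk _internal call /data/indexes/main/roll-hot-buckets -method POST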
Thank you.