Hi Splunkies,
is there a way to set up log event queuing and the chunking of queued events on the forwarder side?
Our problem is that our forwarders flood our indexer with events when it comes back online after an outage (maintenance or otherwise), and some of those events are not indexed and get lost.
The forwarders are configured to use acknowledgement and SSL to encrypt the traffic between forwarders and indexers; both are required by the organization's data management and security policies.
Utilization on the indexer is quite low: CPU is always below 10%, even right after bringing it back online after maintenance.
Any suggestions or ideas, such as a configuration to send queued events in chunks of about 10 MB, and how to set that up?
René
Forwarders automatically queue data when they can't reach an indexer. Usually, that queue is enough to hold events until an indexer is available, but it may not be enough if all your indexers are down for a prolonged period or if a lot of events are generated during the outage.
The maxQueueSize setting in outputs.conf may help: increasing it from the default (500KB, or auto-sized when acknowledgement is enabled) gives the forwarder more room to buffer events during an outage.
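Something along these lines in outputs.conf on the forwarder, as a sketch only — the group name and server address are placeholders, and you'd keep your existing SSL settings alongside this:

```
# outputs.conf on the forwarder -- illustrative, not a drop-in config.
# "primary_indexers" and the server address below are placeholders.
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997
# Keep acknowledgement on, as your policy requires.
useACK = true
# Roughly the 10 MB of buffering asked about; accepts KB/MB/GB suffixes.
maxQueueSize = 10MB
```

After editing, restart the forwarder (or reload the output config) for the change to take effect.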
If you have enough resources, consider standing up a second indexer so you're more likely to have one available at all times. It'll help with search performance, too.