Hello, Team! I am seeing delays in events arriving in the indexes. Events are collected by Splunk Universal Forwarder agents and flow to heavy forwarders (HFs), then on to the indexers. When events stop arriving entirely, restarting the agents helps, but when events arrive with a delay, restarting the agents does not help.

On the universal forwarders, splunkd.log shows warnings like this:

WARN TailReader [282099 tailreader0] - Could not send data to output queue (parsingQueue), retrying...

and metrics.log shows:

+0300 INFO HealthChangeReporter - feature="Large and Archive File Reader-0" indicator="data_out_rate" previous_color=green color=yellow due_to_threshold_value=1 measured_value=1 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."

Where is the problem: on the universal forwarder or on the heavy forwarder? What should I look at?
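Since the TailReader warning means the forwarder's own parsingQueue is backed up by whatever is downstream, one place to look is the queue metrics in metrics.log on each tier (UF, HF, indexers) to find the first hop whose queues are full. A minimal sketch, assuming a default Linux install path and the standard group=queue line format (the sample log line below is hypothetical, constructed for illustration):

```shell
# On a live host you would inspect the real file, e.g.:
#   grep 'group=queue' /opt/splunk/var/log/splunk/metrics.log | tail -50
# Here we use a hypothetical sample line so the snippet is self-contained.
sample='04-01-2024 12:00:00.000 +0300 INFO  Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=511, current_size=1024, largest_size=1024, smallest_size=0'

# A queue reporting blocked=true, or current_size_kb pinned near max_size_kb,
# is the congestion point. Check the tier furthest downstream first: if the
# indexers' queues are full, the HF and UF queues back up as a consequence.
echo "$sample" | grep -o 'blocked=true'
```

If the indexers' queues are healthy but the HF's are full, the bottleneck is on the HF; if the HF is also healthy, look at the UF's output/network side.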