Audit event generator: Now skipping indexing of internal audit events, because the downstream queue is not accepting data. Will keep dropping events until data flow resumes. Review system health: ensure downstream indexing and/or forwarding are operating correctly.
Root Cause: More than 70% of forwarding destinations have failed. Ensure your hosts and ports in outputs.conf are correct. Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct.
Last 50 related messages:
05-07-2018 13:30:34.005 -0400 WARN TcpOutputProc - Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group local_55153 has been blocked for 1580 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
05-07-2018 13:30:24.089 -0400 WARN TcpOutputProc - Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group local_55153 has been blocked for 1570 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
05-07-2018 13:30:14.070 -0400 WARN TcpOutputProc - Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group local_55153 has been blocked for 1560 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
05-07-2018 13:30:04.056 -0400 WARN TcpOutputProc - Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group local_55153 has been blocked for 1550 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
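The root-cause message above points at outputs.conf on the forwarder. As a reference, a minimal forwarding stanza might look like the sketch below; the host names and port are hypothetical placeholders, so substitute your actual receivers, and consult outputs.conf.spec for the SSL-related settings if you forward over TLS.

```ini
# outputs.conf on the forwarder (hypothetical hosts/port -- adjust to your indexers)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Each entry must match a receiver that is up and listening (splunktcp input on 9997)
server = idx1.example.com:9997, idx2.example.com:9997
```

Verify that each listed indexer is running and reachable on the configured port (for example with `telnet idx1.example.com 9997` from the forwarder); a wrong host/port or a certificate mismatch will show up as blocked output groups like `local_55153` above.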
TailReader-0
Root Cause: The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data.
Last 50 related messages:
05-07-2018 13:04:20.241 -0400 WARN TailReader - Could not send data to output queue (parsingQueue), retrying...
05-07-2018 13:04:14.444 -0400 INFO TailReader - Starting batchreader0 thread
05-07-2018 13:04:14.444 -0400 INFO TailReader - Registering metrics callback for: batchreader0
05-07-2018 13:04:14.442 -0400 INFO TailReader - Starting tailreader0 thread
05-07-2018 13:04:14.442 -0400 INFO TailReader - Registering metrics callback for: tailreader0
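Since the TailReader root cause above says the processing queues are full, one common diagnostic step (a suggested search, not from the original post) is to look for blocked queues in splunkd's own metrics, assuming you can search the `_internal` index:

```spl
index=_internal source=*metrics.log* group=queue blocked=true
| stats count by host, name
| sort - count
```

Whichever queue (e.g. parsingqueue, indexqueue, tcpout) shows the most blocked events is usually the bottleneck; here the blocked TCP output group suggests the backpressure originates downstream at the receivers rather than in local indexing.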