Hello esix,
Thank you for your reply; I'll answer point by point:
1)
Yes, you are right. The thing is that the HF shouldn't be doing any parsing at all, but when I didn't install the add-ons on it, the data was not parsed (we had installed the add-ons only on the indexers and the search head). Once we installed the add-ons on the heavy forwarders, it was OK. So this is very weird; maybe I need to check the configuration on the HF, but as I said to nittala, the indexAndForward parameter is set to false...
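For reference, this is roughly what the relevant part of outputs.conf looks like on the HF (the hostnames and ports below are placeholders, but indexAndForward is the actual setting I am referring to):

# outputs.conf on the heavy forwarder (placeholder servers)
[tcpout]
defaultGroup = default-autolb-group
indexAndForward = false

[tcpout:default-autolb-group]
server = indexer1.example.com:9997, indexer2.example.com:9997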
2)
I have tested the bandwidth and it is OK; there is no saturation problem, etc.
The indexer queues are fine; I have checked them.
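For anyone checking the same thing, a search along these lines against the internal metrics shows the queue fill levels (just a sketch; group=queue, current_size_kb and max_size_kb are standard metrics.log fields, but adjust the time range and any host filter for your own environment):

index=_internal source=*metrics.log* group=queue
| eval pct_full = round(current_size_kb / max_size_kb * 100, 1)
| timechart avg(pct_full) by name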
Regarding this phrase:
After that, you need to understand that if Splunk can't send TCPOut (indexing queue...) it will hold the data and keep trying. Once that queue is filled, it back-pressures against the Typing, then the Aggregation, and then the Parsing queues. So if you really are not doing any filtering on your Heavies, I would say your network connectivity would be one of the primary areas I would look at.
I agree with you, because I have these messages on the heavy forwarder:
07-17-2018 11:33:58.136 -0300 WARN TcpOutputProc - Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked for 100 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
07-17-2018 11:35:38.963 -0300 WARN TcpOutputProc - Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked for 200 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
07-17-2018 11:38:12.260 -0300 WARN TcpOutputProc - Tcpout Processor: The TCP output processor has paused the data flow. Forwarding to output group default-autolb-group has been blocked for 100 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
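To quantify how often the output is pausing, a search like this over the HF's internal logs should work (a sketch, built from the TcpOutputProc warnings above):

index=_internal sourcetype=splunkd component=TcpOutputProc "blocked for"
| timechart span=5m count by host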