There were 7 messages of this kind during a 10-minute interval.
I found that "reply=9" means "server is busy": it is a message telling the log source to stop sending data because the HF is overloaded (and the log source really did stop sending data).
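To see how long such a back-pressure window lasts, the reply=9 errors in splunkd.log can be bucketed by minute. This is a minimal sketch against fabricated sample lines; the field layout is simplified from the error quoted above, and real splunkd.log entries carry additional fields.

```shell
# Illustrative splunkd.log lines (simplified; real entries have more fields).
cat > /tmp/splunkd_sample.log <<'EOF'
09-01-2024 10:02:11.000 +0200 ERROR HttpInputDataHandler - Failed processing http input, reply=9
09-01-2024 10:03:40.000 +0200 ERROR HttpInputDataHandler - Failed processing http input, reply=9
09-01-2024 10:09:55.000 +0200 ERROR HttpInputDataHandler - Failed processing http input, reply=9
EOF

# Bucket the reply=9 errors by minute: a short, bounded burst is
# consistent with a temporary CPU stall rather than a permanent problem.
grep 'HttpInputDataHandler.*reply=9' /tmp/splunkd_sample.log \
  | awk '{print $1, substr($2, 1, 5)}' \
  | sort | uniq -c
```

On a real HF the same pipeline would run against `$SPLUNK_HOME/var/log/splunk/splunkd.log`.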
At the same time, the parsing, aggregation, typing, httpinput, and splunktcpin queues had a 100% fill ratio, while the indexing queue was at 0%.
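The queue fill pattern can be confirmed from metrics.log on the HF, which logs per-queue metrics (group=queue lines) and marks saturated queues with blocked=true. A minimal sketch against fabricated sample lines; the real lines contain more fields, and the exact layout may differ slightly between Splunk versions.

```shell
# Illustrative metrics.log lines (simplified group=queue entries).
cat > /tmp/metrics_sample.log <<'EOF'
INFO Metrics - group=queue, name=parsingqueue, blocked=true, max_size_kb=512, current_size_kb=512
INFO Metrics - group=queue, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1024
INFO Metrics - group=queue, name=indexqueue, max_size_kb=500, current_size_kb=0
EOF

# Count blocked events per queue. Blocking in the queues upstream of an
# empty indexqueue points at local processing (CPU), not the IDX tier.
grep 'group=queue' /tmp/metrics_sample.log \
  | grep 'blocked=true' \
  | sed 's/.*name=\([a-z]*\).*/\1/' \
  | sort | uniq -c
```

On a real HF the file is `$SPLUNK_HOME/var/log/splunk/metrics.log`.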
At the same time, the VMware host running the HF was probably overloaded: the CPU frequency on this host is usually about 1 GHz, but it briefly rose to 4 GHz during this interval (probably not caused by the Splunk HF itself).
At the same time, there were no ERROR messages in splunkd.log on the IDX cluster that receives data from this HF.
Based on this information, I came to the following conclusion:
Because the indexqueue on the HF was not full and there were no ERRORs on the IDX cluster, there was no problem on the IDX cluster or on the network between the HF and the IDX cluster.
Due to the VMware host overload, the HF did not have sufficient resources to process messages, so the parsing, aggregation, and typing queues became full.
As a result, the HF:
- filled the httpinput and splunktcpin queues,
- generated the error "ERROR HttpInputDataHandler - Failed processing http input",
- stopped receiving data from the log source.
As soon as the VMware host overload ended (after about 10 minutes), data reception resumed and no data was lost.
Could you please review my conclusion and tell me whether I am right, or whether there is something more to investigate?
And what can I do to avoid this problem in the future? Reconfigure the queue settings (set a higher max_size_kb)? Add more resources to the VMware host? Or something else?
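For reference, queue sizes on the HF are raised per queue in server.conf, to my understanding via the maxSize setting in [queue=...] stanzas. The values below are purely illustrative, not a recommendation; larger queues only buffer a short CPU stall, they do not remove the underlying resource shortage.

```ini
# server.conf on the HF -- illustrative sizes, assuming the default
# [queue=<name>] / maxSize mechanism; tune to your actual event volume.
[queue=parsingQueue]
maxSize = 6MB

[queue=aggQueue]
maxSize = 6MB

[queue=typingQueue]
maxSize = 6MB
```

If the overload windows are longer or frequent, fixing the VMware host contention (CPU reservation or moving the HF to a less loaded host) is probably the more robust option.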