Getting Data In

Why am I getting the error "HttpInputDataHandler - Failed processing http input" in splunkd.log on a heavy forwarder?



I am troubleshooting the following problem:

HEC on a heavy forwarder (HF) is used for receiving data. In splunkd.log on the HF I found this error:


ERROR HttpInputDataHandler - Failed processing http input, token name=linux_rh, channel=n/a, source_IP=, reply=9, events_processed=18, http_input_body_size=8405, parsing_err="Server is busy"


There were 7 messages of this kind during a 10-minute interval.

I found that "reply=9" means "server is busy": it is a message telling the log source to stop sending data because the HF is overloaded (the log source really did stop sending data).
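On the sending side, the "Server is busy" reply (HTTP 503 from HEC) is a signal to back off and retry rather than drop events. A minimal sketch of such backoff logic; the function names and parameters here are illustrative, not Splunk APIs:

```python
import time

def send_with_backoff(send_fn, event, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry send_fn(event) with exponential backoff while the server is busy.

    send_fn is a placeholder for the actual HTTP POST to the HEC endpoint;
    it should return True on success and False when HEC answers
    "Server is busy". The sleep argument is injectable for testing.
    """
    for attempt in range(max_retries):
        if send_fn(event):
            return True
        # Exponential backoff between retries: base_delay, 2x, 4x, ...
        sleep(base_delay * (2 ** attempt))
    return False
```

A client with this behavior rides out a short overload window like the one described here instead of losing data.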

At the same time, the parsing, aggregation, typing, httpinput, and splunktcpin queues were at 100% fill ratio, while the indexing queue was at 0%.
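Queue fill over time can be confirmed from the HF's own metrics.log, which records current_size_kb and max_size_kb per queue. A sketch of such a search (the host name is a placeholder):

```
index=_internal host=<your_hf> source=*metrics.log* group=queue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=1m max(fill_pct) by name
```

A sustained 100% on parsingQueue, aggQueue, and typingQueue with indexQueue near 0% matches the pattern described above.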

At the same time, the VMware host on which the HF runs was probably overloaded: the CPU frequency on this host is usually about 1 GHz, but it rose briefly to 4 GHz during this period (probably not caused by the Splunk HF).

At the same time, there were no ERROR messages in splunkd.log on the IDX cluster that receives data from this HF.

Based on this information, I came to the following conclusion:

  • Because the index queue on the HF was not full and there were no ERRORs on the IDX cluster, there was no problem on the IDX cluster or on the network between the HF and the IDX cluster.
  • Due to VMWare host overload, the HF did not have sufficient resources to process messages, so the parsing, aggregation, and typing queues became full.
  • As a result:
    • the httpinput and splunktcpin queues filled up,
    • the ERROR "HttpInputDataHandler - Failed processing http input" was generated, and
    • the HF stopped receiving data from the log source.

As soon as the VMware host overload ended (after about 10 minutes), data reception resumed and no data was lost.

Could you please review my conclusion and tell me whether I am right? Or is there something more to investigate?

And what can I do to avoid this problem in the future? Re-configure the queue settings (set a higher max_size_kb)? Add capacity to the VMware host? Or something else?
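For reference, queue sizes can be raised per queue in server.conf on the HF. If the root cause is CPU starvation on the VMware host, larger queues only buy buffering time, but they can absorb short spikes like this one. The values below are examples, not recommendations:

```
# server.conf on the heavy forwarder -- sizes are example values only
[queue=parsingQueue]
maxSize = 10MB

[queue=aggQueue]
maxSize = 10MB

[queue=typingQueue]
maxSize = 10MB
```

A restart of splunkd is needed for these settings to take effect.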

Thank you very much in advance for any input.

Best regards

Lukas Mecir 
