Getting Data In

Why am I getting the error "HttpInputDataHandler - Failed processing http input" in splunkd.log on a heavy forwarder?

lukasmecir
Path Finder

Hello,

I am troubleshooting the following problem:

HEC on a heavy forwarder (HF) is used for receiving data. In splunkd.log on the heavy forwarder I found this error:


ERROR HttpInputDataHandler - Failed processing http input, token name=linux_rh, channel=n/a, source_IP=10.177.155.14, reply=9, events_processed=18, http_input_body_size=8405, parsing_err="Server is busy"


There were 7 messages of this kind during a 10-minute interval.
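
For anyone who wants to reproduce the count, a search along these lines should show the error rate over time (assuming the _internal index is searchable on the HF; it just matches the raw message text):

index=_internal sourcetype=splunkd ERROR HttpInputDataHandler "Failed processing http input"
| timechart span=1m count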

I found that "reply=9" means "server is busy" - this is a message telling the log source to stop sending data because the HF is overloaded (and the log source really did stop sending data).

At the same time, the parsing, aggregation, typing, httpinput, and splunktcpin queues had a 100% fill ratio, while the indexing queue had a 0% fill ratio.
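
Something like this against metrics.log should show the fill ratios over time (assuming the standard group=queue metrics with the usual current_size_kb and max_size_kb fields; queue names may differ slightly by version):

index=_internal source=*metrics.log group=queue
| eval fill_pct=round(current_size_kb/max_size_kb*100,1)
| timechart span=1m max(fill_pct) by name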

At the same time, the VMware host on which the HF is running was probably overloaded - the CPU frequency on this host is usually about 1 GHz, but it briefly rose to 4 GHz during this period (probably not caused by the Splunk HF).

At the same time, there were no ERROR messages in splunkd.log on the IDX cluster that receives data from the HF in question.

Based on this information, I came to the following conclusion:

  • Because the indexing queue on the HF was not full and there were no ERRORs on the IDX cluster, there was no problem on the IDX cluster or on the network between the HF and the IDX cluster.
  • Due to the VMware host overload, the HF did not have sufficient resources to process messages, so the parsing, aggregation, and typing queues became full.
  • As a result:
    • the httpinput and splunktcpin queues filled up
    • the ERROR HttpInputDataHandler - Failed processing http input messages were generated
    • the log source stopped sending data

As soon as the VMware host overload ended (after about 10 minutes), data reception resumed and no data was lost.

Could you please review my conclusion and tell me if I am right? Or is there something more to investigate?

And what should I do to avoid this problem in the future? Reconfigure the queue settings (set a higher max_size_kb)? Add more resources to the VMware host? Or something else?
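
If re-configuring the queues is the answer, I assume it would look something like this in server.conf on the HF (queue names and sizes below are just example values; as far as I know the config setting is maxSize, while max_size_kb is only the metrics.log field name). My understanding is that larger in-memory queues only buffer bursts for longer and do not add processing capacity, so please correct me if the host resources are the real fix:

[queue=parsingQueue]
maxSize = 10MB

[queue=aggQueue]
maxSize = 10MB

[queue=typingQueue]
maxSize = 10MB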

Thank you very much in advance for any input.

Best regards

Lukas Mecir 
