
HTTP event data is not received at index

palyogit
New Member


 

Though in the log it says HttpInputDataHandler - handled token name=xyz.

 

How do I debug this? I checked splunkd.log and could not find anything fishy.

 

07-16-2025 16:14:39.809 +0800 DEBUG HttpInputDataHandler - handled token name=embedded, channel=n/a, source_IP=x.y.z.a, reply=0, events_processed=1, http_input_body_size=10338, parsing_err="", body_chunk="{"action": "queued", "workflow_job": {"id": 46075907488, "run_id": 16313804135, "workflow_name": "linux-ci-pipeline", "head_branch": "dts_changes", "run_url": "https://api.github.com/repos/org/repo-name/actions/runs/16313804135", "run_attempt": 1, "node_id": "CR_kwDOHHhjyM8AAAAKulaNoA", "head_sha": "9fd419d2fcd5fc775c4b61a5392133630d5763b8", "url": "https://api.github.com/repos/org/repo-name/actions/job"
07-16-2025 16:14:39.809 +0800 DEBUG UTF8Processor - Done key received for: source::/infrastructure/da_infra/splunk/tarball/splunk_instance/splunk/var/log/splunk/metrics.log|host::baip052|splunkd|2532
07-16-2025 16:14:39.809 +0800 INFO UTF8Processor - Converting using CHARSET="UTF-8" for conf "source::http:embedded|host::10.244.215.89:8088|httpevent|"
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to metrics_log_clone::s
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Extracted metrics_log_clone::s
07-16-2025 16:14:39.809 +0800 INFO LineBreakingProcessor - Using truncation length 10000 for conf "source::http:embedded|host::10.244.215.89:8088|httpevent|"
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to _metrics
07-16-2025 16:14:39.809 +0800 INFO LineBreakingProcessor - LB_CHUNK_BREAKER uses truncation length 2000000 for conf "source::http:embedded|host::10.244.215.89:8088|httpevent|"
07-16-2025 16:14:39.809 +0800 INFO LineBreakingProcessor - Using lookbehind 100 for conf "source::http:embedded|host::10.244.215.89:8088|httpevent|"
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Extracted _metrics
07-16-2025 16:14:39.809 +0800 WARN LineBreakingProcessor - Truncating line because limit of 10000 bytes has been exceeded with a line length >= 10338 - data_source="http:embedded", data_host="10.244.215.89:8088", data_sourcetype="httpevent"
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to group::pipeline
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Extracted group::pipeline
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to name::dev-null
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Extracted name::dev-null
07-16-2025 16:14:39.809 +0800 DEBUG regexExtractionProcessor - RegexExtractor: Interpolated to processor::nullqueue
07-16-2025 16:14:39.809 +0800 DEBUG UTF8Processor - Done key received for: source::http:embedded|host::10.244.215.89:8088|httpevent|


palyogit
New Member

Thanks everyone for your responses. The issue was due to the DATETIME_CONFIG setting in props.conf. It was set to a custom value, which was causing the events to be dropped. Setting DATETIME_CONFIG = NONE resolved the issue.
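
For anyone hitting the same issue, a minimal sketch of the change, assuming the data comes in under the httpevent sourcetype seen in the logs above:

# props.conf #
[httpevent]
# NONE disables timestamp extraction from the raw event text; the event
# time then comes from the input layer (for HEC, the time field in the
# request, or the time of receipt).
DATETIME_CONFIG = NONE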


livehybrid
SplunkTrust

Hi @palyogit 

Looking at this I think there are two issues. I'm not entirely sure they are related, as others have suggested, because you wouldn't usually expect an event to be dropped when it hits the TRUNCATE limit - you would just be left with the first 10,000 characters.

The first thing to do is increase that 10000 limit - are you expecting the events to be this large?

# props.conf #
[httpevent]
# Increase to a number bigger than the events which are being truncated.
TRUNCATE=50000 

The other log line which caught my eye is:

RegexExtractor: Interpolated to processor::nullqueue

especially because you are missing the events entirely. Do you have any props which are setting the nullqueue? Could you please run btool and share the output?

$SPLUNK_HOME/bin/splunk cmd btool props list --debug httpevent
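
For reference, nullQueue routing usually looks something like the sketch below (the stanza and transform names here are just placeholders). If the btool output shows a TRANSFORMS-* entry of this shape under [httpevent], that would explain why the events never reach the index:

# props.conf #
[httpevent]
TRANSFORMS-drop_all = drop_everything

# transforms.conf #
[drop_everything]
# Any event matching REGEX is routed to the nullQueue and discarded
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue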

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing

PrewinThomas
Motivator

@palyogit 

Two main things stand out in the log:

WARN LineBreakingProcessor - Truncating line because limit of 10000 bytes has been exceeded...
regexExtractionProcessor - Interpolated to processor::nullqueue

It looks like your truncation limit is being hit, which may be discarding the event.

Increase the TRUNCATE limit in props.conf and test again.

Eg:
[httpevent]
TRUNCATE = 20000

You can also refer to:

https://help.splunk.com/en/data-management/collect-http-event-data/use-hec-in-splunk-enterprise/http...

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!


kiran_panchavat
Champion

@palyogit 

Check this documentation and try sending a sample event to HEC (for example with curl, as shown below the links).

https://help.splunk.com/en/splunk-enterprise/get-started/get-data-in/9.4/get-data-with-http-event-co... 

https://help.splunk.com/en/splunk-enterprise/get-started/get-data-in/9.4/get-data-with-http-event-co... 
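
As a quick sanity check, you could send a test event with curl (replace the host, token and index with your own values; the values below are placeholders):

curl -k https://<hec-host>:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "hec test event", "sourcetype": "httpevent", "index": "<your_index>"}'

A healthy token should respond with {"text":"Success","code":0}, and you can then search the target index for the test event.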

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!

kiran_panchavat
Champion

@palyogit 

Ensure that your HEC input includes a valid index=. A missing or mistyped value causes Splunk to drop the data.
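
For example, the token's inputs.conf stanza on the HEC receiver should point at an index that actually exists (the token name below is taken from your logs; the index values are placeholders):

# inputs.conf #
[http://embedded]
token = <your-token-guid>
# Default index for events that do not specify one
index = main
# Indexes this token is allowed to write to
indexes = main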

HttpInputDataHandler - handled token name=embedded … events_processed=1 … Truncating line because limit of 10000 bytes …

This means Splunk HEC received the event and parsed it, but truncated the line at ~10 kB, which likely leads to it being dropped before indexing.

Did this help? If yes, please consider giving kudos, marking it as the solution, or commenting for clarification — your feedback keeps the community going!