
Intermittent log data ingestion with Packetbeat JSON file

s_s
Observer

Hello, 

I am experiencing intermittent log ingestion issues on some servers and have observed potential queue saturation in the process. Below are the details of the issue and the related observations:

  1. Setup Overview:
    I am using Packetbeat to capture DNS queries across multiple servers.
    Packetbeat writes JSON log files and rotates them across 10 files, each with a maximum size of 50 MB; it fills 3-4 JSON files every minute.
    Setup -> Splunk Cloud 9.2.2, on-prem Heavy Forwarder 9.1.2, and Universal Forwarder 9.1.2

    Example list of Packetbeat log files (rotated by Packetbeat):
    packetbeat.json
    packetbeat.1.json
    packetbeat.2.json
    packetbeat.3.json
    ...
    packetbeat.9.json

  2. Issue Observed:
    On some servers, the logs are ingested and monitored consistently by the Splunk agent, functioning as expected.
    However, on other servers:
    Logs are ingested for a few minutes, followed by a 5–6-minute gap.
    This cycle repeats, resulting in missing data in between, while other data collected from the same server is ingested correctly (a search to confirm these gaps is sketched after the config section below).
    Screenshot: Intermittent_data_ingestion.png
  3. Additional Observations:
    While investigating the issue, I observed the following log entry in the Universal Forwarder's _internal index:

11-15-2024 17:27:35.615 -0600 INFO HealthChangeReporter - feature="Real-time Reader-0" indicator="data_out_rate" previous_color=yellow color=red due_to_threshold_value=2 measured_value=2 reason="The monitor input cannot produce data because splunkd's processing queues are full. This will be caused by inadequate indexing or forwarding rate, or a sudden burst of incoming data."
host = EAA-DC
index = _internal
source = C:\Program Files\SplunkUniversalForwarder\var\log\splunk\health.log
sourcetype = splunkd
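
To narrow down which queue is actually saturating around those gaps, a search along these lines over the forwarder's metrics.log should show the per-queue fill over time (a rough sketch; the host filter matches the event above and the 1-minute span is arbitrary):

index=_internal host=EAA-DC source=*metrics.log* group=queue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 1)
| timechart span=1m max(fill_pct) BY name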
  • The following configuration is applied to all DNS servers:

limits.conf

[thruput]
maxKBps = 0

server.conf

[queue]
maxSize = 512MB

inputs.conf

[monitor://C:\packetbeat.json]
disabled = false
index = dns
sourcetype = packetbeat
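
The ingestion gaps themselves can be confirmed from the search side with something like the search below, which counts events per minute and exposes the indexing lag for an affected host (a sketch; index and sourcetype match the inputs.conf stanza above, and EAA-DC is the host from the health.log event):

index=dns sourcetype=packetbeat host=EAA-DC
| eval lag_sec = _indextime - _time
| timechart span=1m count, max(lag_sec) AS max_lag_sec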
Any direction to resolve this is appreciated! Thank you!


sainag_splunk
Splunk Employee

@s_s Hello, check out the queues on the HWF pipeline, and also see whether you can apply asynchronous forwarding.

https://www.linkedin.com/pulse/splunk-asynchronous-forwarding-lightning-fast-data-ingestor-rawat
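
As a first check, something like the search below against _internal should show whether it is the HWF queues or the UF queues that are reporting blocked (a rough sketch; tighten the host filter to your heavy forwarder and the affected DNS servers as needed):

index=_internal source=*metrics.log* group=queue blocked=true
| stats count BY host, name
| sort - count

If only the UF reports blocked queues, the bottleneck is more likely on the UF-to-HWF leg; if the HWF queues are also blocking, look at the HWF-to-Cloud output and the asynchronous forwarding settings described in the link above.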

If this helps, please upvote.
Together we make the Splunk Community stronger.