All Apps and Add-ons

Splunk App for Stream: Trying to add data from a pcap file, why are we getting "ERROR stream.CaptureServer - Event queue overflow"?

Lindaiyu
Path Finder

Hello Splunkers,

Can you help me out, please? I am having a problem with the Splunk App for Stream when trying to add data from a pcap file.

Each time I run the following command to ingest the data:

 ./streamfwd -r /mnt/sdb1/Hcaptures/s0.pcap -s http://localhost:8889

I get the following errors on CLI:

14:26:04.665 ERROR stream.SnifferReactor - Dropped 1 TCP session(s) due to session limit reached
14:26:06.946 INFO  stream.StreamSender - (#1) Connection established to 127.0.0.1:8889
14:26:06.962 INFO  stream.StreamSender - (#2) Connection established to 127.0.0.1:8889
14:26:06.976 INFO  stream.StreamSender - (#3) Connection established to 127.0.0.1:8889
14:26:06.990 INFO  stream.StreamSender - (#4) Connection established to 127.0.0.1:8889
14:26:07.003 INFO  stream.StreamSender - (#5) Connection established to 127.0.0.1:8889
14:26:07.089 ERROR stream.CaptureServer - Event queue overflow; dropping 10000 events
14:26:07.300 ERROR stream.CaptureServer - Event queue overflow; dropping 10000 events
14:26:07.326 INFO  stream.StreamSender - Successfully pinged server (config up to date): f76495fc-ded1-4f7d-993e-1271f2511f7c
14:26:07.448 ERROR stream.CaptureServer - Event queue overflow; dropping 10000 events
14:26:07.604 INFO  stream.StreamSender - (#6) Connection established to 127.0.0.1:8889
14:26:07.628 INFO  stream.StreamSender - (#7) Connection established to 127.0.0.1:8889
14:26:07.647 INFO  stream.StreamSender - (#8) Connection established to 127.0.0.1:8889
14:26:07.662 INFO  stream.StreamSender - (#9) Connection established to 127.0.0.1:8889
14:26:07.714 ERROR stream.CaptureServer - Event queue overflow; dropping 10000 events
14:26:07.972 ERROR stream.CaptureServer - Event queue overflow; dropping 10000 events
14:26:08.187 ERROR stream.CaptureServer - Event queue overflow; dropping 10000 events
14:26:08.493 ERROR stream.CaptureServer - Event queue overflow; dropping 10000 events

Is this due to CPU usage, disk I/O, or something like that?

Or is this coming from the app itself? Either way, could someone please give me a solution!

Thanks a lot!
Jessica

1 Solution

mdickey_splunk
Splunk Employee

I assume your pcap file is fairly large? What is probably happening here is that the thread reading the file is much faster than the one sending the events to splunkd, so the queue in between is getting overwhelmed. You could try increasing <MaxEventQueueSize> in streamfwd.xml, but most likely this would just use more memory without fixing the problem. Instead, try limiting how fast the pcap file is read by setting the bitrate with the "-b" command-line parameter:

./streamfwd -b 10000000 -r /mnt/sdb1/Hcaptures/s0.pcap -s http://localhost:8889

This will limit the rate of the reader thread to about 10 Mbps, which should be slow enough for the sender to keep up. The default bitrate is currently unlimited, and it probably shouldn't be (I just filed a bug to change this).
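For reference, if you do decide to experiment with a larger queue, the setting lives in streamfwd.xml. A rough sketch follows; the wrapper element name and the value shown are assumptions for illustration only, so match them to the actual structure of your existing streamfwd.xml:

<!-- streamfwd.xml (sketch; the <StreamConfiguration> wrapper is assumed here,
     use whatever root element your file already has) -->
<StreamConfiguration>
    <!-- Upper bound on the in-memory queue between the pcap reader thread
         and the sender thread. 200000 is an illustrative value, not a recommendation. -->
    <MaxEventQueueSize>200000</MaxEventQueueSize>
</StreamConfiguration>

As noted above, though, a bigger queue mostly just delays the overflow; throttling the reader with "-b" addresses the mismatch directly.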


mdickey_splunk
Splunk Employee

Just for historical purposes, in case someone is searching for this error: you will most commonly encounter "Event queue overflow" errors from the modular input process when it is running inside a Universal Forwarder that cannot send data to your indexers fast enough. Most often this is caused by leaving the default limits.conf settings in place, which restrict the forwarder's throughput to 256 KBps. You can fix this by adding the following to your limits.conf:

[thruput]
maxKBps = 0

Please see Splunk Components Requirements in the Stream documentation for more information.
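For anyone applying this on a Universal Forwarder: the stanza above typically goes into a local limits.conf and the forwarder needs a restart to pick it up. A sketch, assuming a default install path (adjust to your deployment; an app-level local directory also works):

# $SPLUNK_HOME/etc/system/local/limits.conf
[thruput]
# 0 removes the forwarder-side throughput cap entirely
maxKBps = 0

# then restart the forwarder
$SPLUNK_HOME/bin/splunk restart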


mdickey_splunk
Splunk Employee

BTW "-s http://localhost:8889" is implied. You only need to specify "-s" in the command line if you are changed the defaults.


Lindaiyu
Path Finder

Hello,
Thank you very much for your answer; it works.
I think it's because the rate of sending to Splunk is slower than the rate of reading. I also checked limits.conf and it's OK.
