The Stream configuration captures HTTP with no aggregation, and collects the src, dest, content, and time_taken fields.
The packet queue size chart on the Stream Forwarder Metrics dashboard has grown to a very large value, and the streamfwd process on the independent Stream forwarder is running out of memory.
What exactly does the packet queue size chart measure, i.e. what packets are being queued?
How does the Stream App analyze the TCP flows of a TCP session?
How does it handle packet loss, long response delays, abnormal TCP session closes, and so on?
Are there any packet flow types that the Stream App does not analyze?
How much traffic are you trying to capture, what are your system specs, and how many processingThreads do you have configured? Typically, this is caused by not assigning enough threads (corresponding to CPU cores) to process the packets coming in.
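As a rough illustration of the sizing question above, here is a back-of-envelope sketch. The average packet size and per-thread packet rate are purely hypothetical assumptions for discussion, not measured streamfwd figures:

```python
# Back-of-envelope sizing: how many processing threads a given traffic
# level might need to keep the packet queue from growing. The per-thread
# throughput and average packet size are hypothetical assumptions,
# not measured streamfwd numbers.

def threads_needed(traffic_gbps: float,
                   avg_packet_bytes: int = 800,
                   pkts_per_thread_per_sec: int = 150_000) -> int:
    """Estimate how many processing threads are needed for the packet rate."""
    packets_per_sec = traffic_gbps * 1e9 / 8 / avg_packet_bytes
    # Round up: if every thread is saturated, the queue grows without bound.
    return -(-int(packets_per_sec) // pkts_per_thread_per_sec)

# e.g. 2 Gbps at the assumed packet size is ~312,500 packets/sec
print(threads_needed(2.0))
```

The point is only that the queue accumulates whenever the arrival rate exceeds the aggregate processing rate; the real numbers have to come from your own metrics.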
Independent Stream Forwarder : 20 cores, 64 GB RAM, CentOS 7.1, Splunk 6.5.5, Stream 7.1.1
Traffic : ~2 Gbps (average)
processingThreads = 8
maxEventQueueSize = 1000000000
maxPacketQueueSize = 268435456
maxTcpReassemblyPacketCount = 500000000
tcpConnectionTimeout = 120
maxEventAttributes = 2000
The packets come from an aggregator that merges and filters the traffic.
I am wondering why packets are being delayed in the packet queue.
Thanks for your support.
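For comparison with the settings above, a more conservative starting point might look like the sketch below. Every value here is an illustrative assumption to tune against your own dashboard metrics, not a documented recommendation:

```ini
# Illustrative streamfwd.conf sketch -- values are assumptions for
# discussion, not documented defaults or recommendations.
[streamfwd]
processingThreads = 16                  # closer to the available core count
maxPacketQueueSize = 33554432           # 32 MB instead of 256 MB
maxEventQueueSize = 100000
maxTcpReassemblyPacketCount = 10000
tcpConnectionTimeout = 60
maxEventAttributes = 2000
```

Smaller queue and reassembly limits cap memory at the cost of dropping packets earlier when the threads cannot keep up, which is usually preferable to the process being killed out-of-memory.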
Wow, those are really big numbers! I would definitely not recommend setting those so big. It's hard to say exactly what the issue is without digging further into logs or pcaps (best to do that with your SE, if necessary), but here are two possibilities:
I am wondering about the tcpConnectionTimeout option.
If the tcpConnectionTimeout value is changed to 10, won't the number of events increase compared to before?
I think that if packets arrive more than 10 seconds into a TCP session, new events will be generated with time_taken and bytes_in set to 0, because there is no matching request packet.
Is that right?
If so (with tcpConnectionTimeout = 10), will the fragmented packets disappear from the packet queue?
Or, system performance aside, could the unreassembled packets still accumulate in the packet queue?
Thanks for your reply.
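The trade-off being asked about can be sketched with a toy model: a shorter timeout flushes idle sessions out of the reassembly buffer sooner (freeing memory) but emits more partial events. This is only an illustration of the general idea, not how streamfwd is actually implemented:

```python
# Toy model of how a connection timeout trades buffered memory for
# partial events. Illustrative only -- not the streamfwd implementation.

def simulate(timeout, packets):
    """packets: list of (time, session_id) in time order.
    Returns (max_buffered_packets, partial_events_emitted)."""
    buffer = {}            # session_id -> (last_seen_time, packet_count)
    max_buffered = 0
    partial_events = 0
    for t, sid in packets:
        # Evict sessions idle longer than the timeout, emitting a
        # partial event for each (request context lost).
        for s in [s for s, (last, _) in buffer.items() if t - last > timeout]:
            del buffer[s]
            partial_events += 1
        last, count = buffer.get(sid, (t, 0))
        buffer[sid] = (t, count + 1)
        max_buffered = max(max_buffered, sum(c for _, c in buffer.values()))
    return max_buffered, partial_events

# One slow session sending a packet every 15 seconds: with a 120 s
# timeout it stays buffered; with a 10 s timeout it is repeatedly
# flushed early, producing several partial events.
pkts = [(t, "sess-1") for t in range(0, 120, 15)]
print(simulate(120, pkts))   # large buffer, no partial events
print(simulate(10, pkts))    # small buffer, several partial events
```

So in this model a lower timeout does shrink the queue, and the extra events it produces are exactly the "partial" ones with missing request context that the question describes.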