
Splunk App for Stream: How to resolve congestion in parsing queue after starting the universal forwarder?

Splunk Employee

Hello Stream experts,

I'm running a stress test with streamfwd, capturing many short-lived TCP connections at over 35,000 connections per second (cps).
The Splunk App for Stream is running on a universal forwarder (UF), which sends the captured data to a remote indexer.
I configured it as follows:
- Splunk 6.3 on both sides
- Stream app version 6.3.2
- maxKBps = 0 on the UF side
- increased all UF queue sizes, including the exec and parsing queues, to over 30 MB
- applied the recommended TCP parameters on both the UF and the indexer node
- set streamfwd.xml options ( ProcessingThreads : 8, PcapBufferSize : 127108864, TcpConnectionTimeout : 1, MaxEventQueueSize : 500000 )
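For reference, the UF-side queue and throughput settings above can be expressed in the standard .conf files roughly as follows. This is a hedged sketch of the configuration described in the list, not my exact files; the 30MB value mirrors the queue sizing mentioned above.

```ini
# limits.conf on the UF -- remove the forwarder's thruput cap
[thruput]
maxKBps = 0

# server.conf on the UF -- enlarge the pipeline queues (sketch)
[queue=parsingQueue]
maxSize = 30MB

[queue=execQueue]
maxSize = 30MB
```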

As you can see from the chart in the attached image, the parsingQueue fills up very soon after the UF starts. The exec queue congests immediately afterward, and the Stream event queue also fills up.

Strangely, I couldn't find any congestion in the indexer queues at that moment.

What should I check to resolve this problem?
Thank you in advance.



Splunk Employee

My problem was caused by the "compressed = true" option in outputs.conf. Under heavy load (as in a performance-test scenario), the UF's parsingQueue filled up easily with this option enabled.
It was resolved by setting it to false and configuring a larger maxQueueSize (10MB in my case) in outputs.conf.
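In outputs.conf terms, the fix looks roughly like this. The stanza name and indexer address are placeholders; only the compressed and maxQueueSize values reflect the change described above.

```ini
# outputs.conf on the UF -- sketch of the fix (placeholder target address)
[tcpout:primary_indexers]
server = 10.0.0.1:9997
# compression shifted CPU cost onto the UF output pipeline under load
compressed = false
# larger output queue absorbs bursts from streamfwd
maxQueueSize = 10MB
```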
