I have a customer who wants to do a performance test using Splunk Stream. He has two scenarios:
The first is collecting UDP packets without drops. UDP packets will be captured from two 1 Gb NICs, 2 Gbps in total. The fields that need to be extracted are L4 attributes: srcip, srcport, destip, destport, and timestamp. The measurement is the number of dropped packets, so we need to minimize drops.
The second is collecting TCP packets. TCP packets will be generated at a rate of 35,000 connections/second. He will also check the number of lost packets.
Is there any guide or documentation for tuning OS kernel and streamfwd parameters for these kinds of tests?
Thank you in advance.
The Stream documentation has recommendations for setting the Linux kernel parameters: http://docs.splunk.com/Documentation/StreamApp/6.3.2/DeployStreamApp/Deploymentrequirements#Linux
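For reference, kernel tuning of this kind is usually applied via sysctl. The keys below are standard Linux parameters relevant to high-rate capture, but the specific values are illustrative only; treat the linked docs as authoritative:

```shell
# Illustrative receive-buffer tuning for high-rate packet capture.
# These are example values, not Splunk's official recommendations.
sysctl -w net.core.rmem_max=67108864        # max socket receive buffer (bytes)
sysctl -w net.core.rmem_default=67108864    # default socket receive buffer (bytes)
sysctl -w net.core.netdev_max_backlog=10000 # NIC ingress queue length per CPU
# To persist across reboots, add the same keys to /etc/sysctl.conf
# and reload with: sysctl -p
```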
As for configuring the Stream Forwarder, I'd recommend increasing the number of processing threads to ~3-7 by setting the ProcessingThreads parameter in streamfwd.xml file (see docs for more details - http://docs.splunk.com/Documentation/StreamApp/6.3.2/DeployStreamApp/ConfigureStreamForwarder)
Other streamfwd.xml settings that you may need to configure:
PcapBufferSize: set to 67108864 (64MB) or more
MaxTcpSessionCount: may need to increase if the generated traffic load has more than 50000 concurrent sessions
TcpConnectionTimeout: set to a lower value, e.g. ~30 seconds
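As a sketch, the entries above would look like this in streamfwd.xml; merge them into your existing file rather than replacing it (the surrounding element structure should follow your install's default streamfwd.xml, and the values here are just the ones discussed above):

```xml
<ProcessingThreads>4</ProcessingThreads>
<PcapBufferSize>67108864</PcapBufferSize>
<MaxTcpSessionCount>100000</MaxTcpSessionCount>
<TcpConnectionTimeout>30</TcpConnectionTimeout>
```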
Thank you vshcherbakov.
What do you think about "net.ipv4.udp_mem" for UDP?
We haven't had a need to tune the net.ipv4.udp_mem parameter. However, there's obviously enough variance in OS versions, traffic load profiles, etc. to potentially make your customer's test setup different enough from our internal one to warrant different recommendations.
I'd recommend running the test with the settings specified in Stream's doc first to see if there's any need for further kernel settings tuning.
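If you do end up looking at udp_mem, it's worth inspecting the current values first, since the kernel auto-sizes them from system RAM at boot:

```shell
# net.ipv4.udp_mem holds three values (min, pressure, max),
# counted in pages, auto-sized from RAM at boot.
cat /proc/sys/net/ipv4/udp_mem
```

If `netstat -su` shows growing "packet receive errors" under the Udp section during the test, that's a sign the receive path (buffers or udp_mem) is worth revisiting.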
I just noticed that you're planning to capture 2 Gbps on 2x 1 Gbps NICs. One thing I'd watch carefully in this case is the SPAN/TAP/switch drop rate, to make sure you're not overloading the NIC bandwidth with the test traffic.
A poor-quality data feed (missing packets) can cause extra Stream processing overhead (excessive memory and CPU usage) as well as poor event data quality (garbage in, garbage out). TCP traffic is more sensitive to feed quality, but this applies to UDP as well.
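One simple way to keep an eye on feed quality during the test is to watch the kernel's per-interface RX drop counters. A sketch (interface names and counter values are environment-specific; `ethtool -S <iface>` gives finer NIC-level detail where the driver supports it):

```shell
# Print per-interface receive drop counters from /proc/net/dev
# (the 4th receive column is "drop"). Run before and after the test
# and compare: any increase means packets were lost before Stream
# ever saw them.
awk -F'[: ]+' 'NR > 2 { printf "%s rx_dropped=%s\n", $2, $6 }' /proc/net/dev
```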