Hello,
Where can I find the default values of the following parameters in streamfwd.xml, and what is a reasonable range of values for each?
ProcessingThreads, MaxPacketQueueSize, MaxTcpSessionCount, MaxTcpReassemblyPacketCount
And one more question: is there a recommended set of values for the above parameters to collect packets at 1Gbps without problems?
Thank you.
Hello,
I updated the documentation to specify the default values for the parameters you mentioned - http://docs.splunk.com/Documentation/StreamApp/6.3.2/DeployStreamApp/ConfigureStreamForwarder#Advanc...
Stream should be able to handle 1Gbps of non-SSL traffic with the default parameters, although it depends on the traffic shape. I'd recommend starting with the defaults and adjusting only if runtime errors (such as packet queue overflow) occur.
Here is some additional guidance on when to change each of these parameters:
Please note that those last two conditions (exceeding MaxTcpSessionCount or MaxTcpReassemblyPacketCount) are almost always caused by data feed problems, such as a SPAN port that is configured to send only ingress packets, or a large number of packets being dropped (say, trying to send a 2 Gbps stream to a 1 Gb NIC). If increasing them only delays how long it takes for the errors to appear (and raises memory usage), a feed problem is likely the root cause.
So, 99.9% of the time the only parameter you should ever have to change is ProcessingThreads.
If streamfwd runs on a dedicated server, can I assume that the proper value of ProcessingThreads is the same as the number of CPU cores?
The processing thread count should be set based on traffic load; there's no benefit to having 32 threads (not an uncommon core count these days) for processing ~1Gbps. In fact, there's some (not significant) memory overhead from running multiple processing threads. I believe for most use cases there's no need to set more than 8 processing threads.
In addition to memory, there is also marginal CPU overhead for having more ProcessingThreads than is necessary.
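To make that concrete, setting ProcessingThreads for a ~1Gbps workload might look something like this in streamfwd.xml. This is only a minimal sketch: the parameter name comes from this thread, but the root element name and the value of 4 are illustrative assumptions, not verified defaults (check the documentation link above for the actual file structure and default values).

    <streamfwdconfig>  <!-- root element name is a placeholder; see the docs for the real structure -->
        <!-- 4 threads is an illustrative starting point for ~1Gbps; raise toward 8 only if CPU-bound -->
        <ProcessingThreads>4</ProcessingThreads>
    </streamfwdconfig>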
So to what size should I increase MaxTcpSessionCount? And what are the ramifications of increasing it? Should I increase it in 10K blocks until the messages go away?
Thanks
This is a sanity check, so setting it really high won't hurt anything unless the session count actually climbs that high. You could try something like 100k or more and watch it. If it seems to settle around a specific number (say, 30k), then maybe lower it to about 2x that.
Note that exceeding this threshold can be a sign of other problems, in which case the session count will just grow to the limit no matter how high you set it, and at a certain point it would likely run out of memory. Common causes are Stream receiving only ingress packets or only egress packets (but not both), or lots of dropped packets causing reassembly to fail.
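As an illustration of the "set it high and watch it" approach, the change might look like this in streamfwd.xml (again a sketch under the same assumptions about the file's structure; 100000 is simply the starting value suggested above, not a recommended default):

    <streamfwdconfig>  <!-- root element name is a placeholder; see the docs for the real structure -->
        <!-- start high, monitor where the session count settles, then lower to ~2x that value -->
        <MaxTcpSessionCount>100000</MaxTcpSessionCount>
    </streamfwdconfig>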
Great. Thank you.