Splunk App for Stream: Where can I find the default values for the following parameters in streamfwd.xml? Is there a recommended range of values?

kwchang_splunk
Splunk Employee

Hello,

Where can I find the default values of the following parameters in streamfwd.xml, and what is a reasonable range of values for each?

ProcessingThreads, MaxPacketQueueSize, MaxTcpSessionCount, MaxTcpReassemblyPacketCount

One more question: is there a recommended set of values for the above parameters to collect packets at 1 Gbps without problems?

Thank you.

1 Solution

vshcherbakov_sp
Splunk Employee

Hello,

I updated the documentation to specify the default values for the parameters you mentioned - http://docs.splunk.com/Documentation/StreamApp/6.3.2/DeployStreamApp/ConfigureStreamForwarder#Advanc...

Stream should be able to handle 1 Gbps of non-SSL traffic with the default parameters, although it depends on the traffic shape. I'd recommend starting with the defaults and adjusting only if runtime errors such as packet queue overflows occur.
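
For reference, a minimal streamfwd.xml override might look like the sketch below. The element names are the parameters discussed in this thread, but the <StreamFwdConfig> wrapper and the sample values are only illustrative assumptions (they are not the documented defaults), so check the linked documentation for your version before copying anything.

    <?xml version="1.0" encoding="UTF-8"?>
    <StreamFwdConfig>
        <!-- Placeholder values for illustration only; see the docs for the real defaults -->
        <ProcessingThreads>4</ProcessingThreads>
        <MaxPacketQueueSize>10000</MaxPacketQueueSize>
        <MaxTcpSessionCount>100000</MaxTcpSessionCount>
        <MaxTcpReassemblyPacketCount>10000</MaxTcpReassemblyPacketCount>
    </StreamFwdConfig>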

mdickey_splunk
Splunk Employee

Here is some additional guidance on when to change each of these parameters:

  • Increase ProcessingThreads if you see "Max packet queue size exceeded"
  • Do not change MaxPacketQueueSize unless directed by a Splunk engineer (99.9% of the time this will have no impact other than increasing memory usage)
  • Increase MaxTcpSessionCount if you see "Dropped ??? TCP session(s) due to session limit reached"
  • Increase MaxTcpReassemblyPacketCount if you see "TCP reassembly error - maximum number of cached packets reached"

Please note that those last two conditions are almost always caused by data feed problems, such as a SPAN port that is configured to only send ingress packets, or a large number of packets being dropped (say, you are trying to send a 2 Gbps stream to a 1 Gb NIC). If increasing these limits only delays how long it takes for the errors to appear (and raises memory usage), a data feed problem like this is likely the root cause.

So, 99.9% of the time the only parameter you should ever have to change is ProcessingThreads.
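
Put concretely, that usual change would just be the one element; assuming the same streamfwd.xml layout as in the earlier sketch, it might look like this (the value is illustrative, not a recommendation):

    <StreamFwdConfig>
        <!-- Raised only after repeated "Max packet queue size exceeded" messages;
             all other limits are left at their defaults -->
        <ProcessingThreads>4</ProcessingThreads>
    </StreamFwdConfig>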

kwchang_splunk
Splunk Employee

If we use a dedicated server for running streamfwd, is it correct to assume that the proper value of ProcessingThreads would be the same as the number of CPU cores?

vshcherbakov_sp
Splunk Employee

The processing thread count should be set based on traffic load; there's no benefit to having 32 threads (not an uncommon number of CPU cores these days) for processing ~1 Gbps. In fact, there is some (though not significant) memory overhead caused by running multiple processing threads. I believe that for most use cases there's no need to set more than 8 processing threads.

mdickey_splunk
Splunk Employee

In addition to memory, there is also marginal CPU overhead for having more ProcessingThreads than is necessary.

Heff
Splunk Employee

So what value should I increase MaxTcpSessionCount to? And what are the ramifications of increasing it? Should I be increasing it in 10K blocks until the messages go away?

Thanks

mdickey_splunk
Splunk Employee

This is a sanity check, so setting it really high won't hurt anything unless the session count actually tries to go that high. You could try something like 100k or more and watch it. If it seems to settle around a specific number (say, 30k), then maybe lower the limit to 2x that or so.

Note that exceeding this threshold can be a sign of other problems, in which case the session count will just grow to the max no matter how high you set it; at a certain point, streamfwd would likely just run out of memory. Common causes include Stream only seeing ingress packets or only egress packets (but not both), or lots of dropped packets causing TCP reassembly to fail.
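
As a sketch of that watch-and-tune approach (100k is just the starting point suggested here, not a documented value, and the file layout is assumed as in the earlier sketch):

    <StreamFwdConfig>
        <!-- Set high as a sanity check; if sessions settle around e.g. 30k,
             lower this to roughly 2x that observed number -->
        <MaxTcpSessionCount>100000</MaxTcpSessionCount>
    </StreamFwdConfig>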

kwchang_splunk
Splunk Employee

Great. Thank you.
