Splunk heavy forwarder throughput to the indexer doesn't improve even after removing the bandwidth limit (maxKBps = 0). We're only getting about 4 MB/s on a 24-core box with 128 GB RAM, reading from an NFS mount and forwarding to an indexer over a bonded 2x 10 Gbps interface.
Reading from NFS is not the issue; we were able to read/write at 30 MB/s outside the forwarder with a plain copy (cp).
What are the other limiting factors, and what else can we tune on the Splunk side? Please advise.
We also noted that the forwarder uses only one TCP connection to the indexer.
Increasing maxQueueSize to 30MB from its default in outputs.conf improved performance significantly. I would expect the forwarder to auto-tune itself to meet demand based on the system's capacity and configuration; I'm not really impressed with the defaults.
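For reference, the two settings mentioned above live in different files: maxKBps is set in limits.conf, while maxQueueSize is set in outputs.conf. A minimal sketch of both, assuming a single tcpout group; the group name and indexer address are placeholders:

# limits.conf on the heavy forwarder
[thruput]
# 0 disables the forwarder's bandwidth throttle
maxKBps = 0

# outputs.conf on the heavy forwarder
[tcpout:primary_indexers]
server = indexer.example.com:9997
# a larger output queue absorbs bursts; 30MB is what made the difference here
maxQueueSize = 30MB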
Check what the bottleneck is by using the forwarder's metrics logs in _internal. Possible culprits include inefficient regular expressions used for filtering/routing/masking.
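For example, a search along these lines (the forwarder host name is a placeholder) shows which of the forwarder's queues are blocking, which usually points at the slow stage of the pipeline:

index=_internal source=*metrics.log* host=my-heavy-forwarder group=queue blocked=true
| stats count by name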
Also check whether your indexer is busy or not; the Distributed Management Console will help there.
Once that's checked and optimized, you can tell the forwarder to use multiple pipeline sets to parallelize ingestion, processing, and indexing.
http://docs.splunk.com/Documentation/Forwarder/6.4.3/Forwarder/Configureaforwardertohandlemultiplepi...
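A minimal sketch of that setting, assuming the forwarder has the CPU headroom for it (each pipeline set runs its own full processing pipeline and opens its own connection to the indexer, which should also address the single-TCP-connection observation above):

# server.conf on the heavy forwarder
[general]
# run two independent ingestion pipeline sets
parallelIngestionPipelines = 2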